Logback losing my log messages to file

Problem description

I wrote a test program to verify the performance improvements of logback over log4j. But to my surprise, I ran into this strange problem. I am writing some 200k log messages in a loop to a file using the async and file appenders. However, every time, it only logs some 140k or so messages and stops after that. It just prints my last log statement, indicating that it has written everything to the buffer, and the program terminates. If I run the same program with log4j, I can see all 200k messages in the log file. Is there any fundamental architectural difference making this happen? Is there any way to avoid it? We are thinking of switching from log4j to logback, and now this is making me re-think.

This is my logback configuration:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="FILE" class="ch.qos.logback.core.FileAppender">
        <file>logback.log</file>
        <encoder>
            <pattern>%date %level [%thread] %logger{10} [%file:%line] %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="FILE" />
    </appender>

    <root level="info">
        <appender-ref ref="ASYNC" />
    </root>
</configuration>

This is my code:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackTest {

    public static void main(String[] args) throws InterruptedException {
        Logger logbackLogger = LoggerFactory.getLogger(LogbackTest.class);

        List<Integer> runs = Arrays.asList(1000, 5000, 50000, 200000);
        ArrayList<Long> logbackRuntimes = new ArrayList<>(4);

        for (int run = 0; run < runs.size(); run++) {
            logbackLogger.info("------------------------>Starting run: " + (run + 1));
            // logback test
            long stTime = System.nanoTime();
            int i = 0;
            for (i = 1; i <= runs.get(run); i++) {
                Thread.sleep(1);
                logbackLogger.info("This is a Logback test log, run: {}, iter: {}", run, i);
            }
            logbackRuntimes.add(System.nanoTime() - stTime);
            logbackLogger.info("logback run - " + (run + 1) + " " + i);
        }
        Thread.sleep(5000);

        // print results
        logbackLogger.info("Run times:");
        logbackLogger.info("Run\tNoOfMessages\tLog4j Time(ms)\tLogback Time(ms)");
        for (int run = 0; run < runs.size(); run++) {
            logbackLogger.info((run + 1) + "\t" + runs.get(run) + "\t"
                    + logbackRuntimes.get(run) / 1e6d); // nanoTime() is in ns; divide by 1e6 for ms
        }
    }
}

Recommended answer

According to the documentation:

[...] by default, when less than 20% of the queue capacity remains, AsyncAppender will drop events of level TRACE, DEBUG and INFO, keeping only events of level WARN and ERROR. This strategy ensures non-blocking handling of logging events (hence excellent performance) at the cost of losing events of level TRACE, DEBUG and INFO when the queue has less than 20% capacity. Event loss can be prevented by setting the discardingThreshold property to 0 (zero).
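
For example (a sketch only, reusing the ASYNC and FILE appender names from the configuration above), setting discardingThreshold on the async appender keeps INFO events from being discarded when the queue fills up:

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <appender-ref ref="FILE" />
  <discardingThreshold>0</discardingThreshold> <!-- 0 = never discard TRACE/DEBUG/INFO events -->
</appender>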

Another recommended answer

You're using the ASYNC appender. There are two important attributes related to this: queueSize and neverBlock.

When neverBlock is true, log messages are dropped if the queue is full; in that case you need to increase queueSize. When neverBlock is false, application threads are blocked from logging new events until the queue has some free space.

Here's an example of an ASYNC appender that sets those attributes:

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <appender-ref ref="FILE" />
  <queueSize>1024</queueSize> <!-- default is 256 -->
  <neverBlock>true</neverBlock> <!-- default is false; set to true so the appender drops messages instead of blocking the application -->
</appender>
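
If the goal is to keep every message rather than maximize throughput, a variant along these lines (a sketch only; the queueSize value is illustrative) blocks the application briefly instead of dropping events:

<appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
  <appender-ref ref="FILE" />
  <queueSize>4096</queueSize> <!-- illustrative: a larger buffer to absorb bursts -->
  <discardingThreshold>0</discardingThreshold> <!-- never discard TRACE/DEBUG/INFO -->
  <neverBlock>false</neverBlock> <!-- default: block the caller when the queue is full -->
</appender>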