I. Introduction to logback
logback was written by the creator of log4j and performs better than log4j. It currently consists of three modules:
- logback-core: the core module
- logback-classic: an improved version of log4j that also implements the slf4j API, so switching to another logging framework later is easy (a minimal usage sketch follows this list)
- logback-access: the access module; it integrates with Servlet containers to provide access to logs over HTTP
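For example, application code only ever touches the slf4j API; here is a minimal sketch, assuming logback-classic is on the classpath (the Demo class name is just a placeholder):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Demo is a placeholder class. The application depends only on the slf4j API;
// with logback-classic on the classpath these calls are routed to logback,
// and swapping in another slf4j binding later requires no code changes.
public class Demo {
    private static final Logger log = LoggerFactory.getLogger(Demo.class);

    public static void main(String[] args) {
        log.info("application started");
        log.debug("debug detail: {}", 42);
    }
}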
II. logback.xml configuration
<?xml version="1.0" encoding="UTF-8"?>
<!-- debug: whether logback should print its own internal status messages; true means print them. Recommended. -->
<!-- scan: whether to reload the configuration automatically when the file changes -->
<configuration debug="true" scan="true" scanPeriod="1 seconds">
<contextName>logback</contextName>
<!-- Define a property that can be referenced later as ${app.name} -->
<property name="app.name" value="logback_test"/>
<!-- ConsoleAppender writes log output to the console -->
<appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
<!-- A threshold filter: events below the configured level are not printed -->
<!-- DEBUG is configured here, so the console will not print events below the DEBUG level -->
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>
<!-- encoder defaults to PatternLayoutEncoder -->
<!-- Console output pattern -->
<encoder>
<pattern>%d [%thread] %-5level %logger{36} [%file : %line] - %msg%n</pattern>
</encoder>
</appender>
<appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
<!-- Path of the log file -->
<!-- scheduler.manager.server.home is not defined in this file, so the value passed at JVM startup is used -->
<!-- e.g. java -Dscheduler.manager.server.home=/path/to XXXX -->
<file>${scheduler.manager.server.home}/logs/${app.name}.log</file>
<!-- Rolling policy. TimeBasedRollingPolicy is the most commonly used one: it rolls over based on time
and acts as both the rolling policy and the triggering policy. Child elements: <fileNamePattern> is required
and contains the file name plus a "%d" conversion specifier; "%d" may take a java.text.SimpleDateFormat
pattern, e.g. %d{yyyy-MM}; if %d is used on its own, the default pattern yyyy-MM-dd applies. -->
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<!-- File name pattern used for rolled-over files -->
<fileNamePattern>${scheduler.manager.server.home}/logs/${app.name}.%d{yyyy-MM-dd.HH}.log.gz
</fileNamePattern>
<!-- Keep at most 60 rollover periods of history (hours here, given the hourly pattern), with the total size capped at 20GB -->
<maxHistory>60</maxHistory>
<!-- totalSizeCap is only supported from version 1.1.6 onwards -->
<totalSizeCap>20GB</totalSizeCap>
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<!-- Each log file is at most 100MB -->
<maxFileSize>100MB</maxFileSize>
</triggeringPolicy>
<!-- Output pattern -->
<encoder>
<pattern>%d [%thread] %-5level %logger{36} [%file : %line] - %msg%n</pattern>
</encoder>
</appender>
<!-- root is the default logger; the level here is set to info -->
<root level="info">
<!-- Two appenders are referenced; log events are written to both of them -->
<appender-ref ref="stdout"/>
<appender-ref ref="file"/>
</root>
<!-- Loggers whose names start with com.example.logback have their level set to warn -->
<!-- This logger declares no appender of its own, so it inherits the appenders defined on the root logger -->
<logger name="com.example.logback" level="warn"/>
<!-- This logger is obtained via LoggerFactory.getLogger("mytest") -->
<!-- Because this logger would also inherit root's appenders, and root already references stdout while this logger references stdout again, -->
<!-- a message would be printed to the console twice if additivity="false" were not set -->
<!-- additivity controls whether the appenders configured on the root logger are used as well -->
<logger name="mytest" level="info" additivity="false">
<appender-ref ref="stdout"/>
</logger>
<!-- Because additivity="false" is set, the root logger's appenders are not used -->
<!-- but this logger has no appender of its own either, so anything logged through it goes nowhere -->
<logger name="mytest2" level="info" additivity="false"/>
</configuration>
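As a quick check of the logger and additivity rules described in the comments above, the following sketch obtains the three loggers in question; the LogbackConfigDemo class and the message strings are made up:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogbackConfigDemo {
    public static void main(String[] args) {
        // Inherits root's appenders (stdout + file), but only warn and above pass.
        Logger pkgLogger = LoggerFactory.getLogger("com.example.logback.SomeClass");
        pkgLogger.info("dropped: below the warn level");
        pkgLogger.warn("forwarded to root's appenders (console and file)");

        // additivity="false" plus its own stdout appender: printed exactly once.
        Logger myTest = LoggerFactory.getLogger("mytest");
        myTest.info("printed once on the console");

        // additivity="false" and no appender of its own: goes nowhere.
        Logger myTest2 = LoggerFactory.getLogger("mytest2");
        myTest2.info("silently discarded");
    }
}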
III. How logback works internally
1. Obtaining the LoggerFactory
slf4j delegates to the StaticLoggerBinder of the concrete logging framework to obtain an ILoggerFactory; this is how it hooks into the underlying implementation. Here is that class (parts of the code omitted):
public class StaticLoggerBinder implements LoggerFactoryBinder {
private static StaticLoggerBinder SINGLETON = new StaticLoggerBinder();

static {
SINGLETON.init();
}

public static StaticLoggerBinder getSingleton() {
return SINGLETON;
}

/**
* Package access for testing purposes.
*/
void init() {
try {
try {
new ContextInitializer(defaultLoggerContext).autoConfig();
} catch (JoranException je) {
Util.report("Failed to auto configure default logger context", je);
}
// logback-292
if(!StatusUtil.contextHasStatusListener(defaultLoggerContext)) {
StatusPrinter.printInCaseOfErrorsOrWarnings(defaultLoggerContext);
}
contextSelectorBinder.init(defaultLoggerContext, KEY);
initialized = true;
} catch (Throwable t) {
// we should never get here
Util.report("Failed to instantiate [" + LoggerContext.class.getName()
+ "]", t);
}
}
public ILoggerFactory getLoggerFactory() {
if (!initialized) {
return defaultLoggerContext;
}
if (contextSelectorBinder.getContextSelector() == null) {
throw new IllegalStateException(
"contextSelector cannot be null. See also " + NULL_CS_URL);
}
return contextSelectorBinder.getContextSelector().getLoggerContext();
}
}
From this code we can see that:
- 1. getSingleton() returns the singleton instance of this class.
- 2. The static block guarantees that init() is called when the class is initialized.
- 3. init() delegates the initialization of the LoggerContext to ContextInitializer. If any configuration file is found, the LoggerContext is initialized from it; otherwise the default configuration is used.
- 4. It then initializes ContextSelectorStaticBinder, which internally creates a DefaultContextSelector and hands it the LoggerContext configured in step 3.
- 5. getLoggerFactory() either returns the LoggerContext from step 3 directly (if initialization did not complete), or delegates to the DefaultContextSelector to return the LoggerContext.
All configuration therefore ends up in the LoggerContext: once you hold a reference to it, you have access to the entire logging configuration. The loggers themselves live in its Map<String, Logger> loggerCache field, keyed by logger name.
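Here is a minimal sketch of that lookup path, assuming slf4j 1.7.x (which still uses the StaticLoggerBinder mechanism) with logback-classic on the classpath; the BindingSketch class name is just a placeholder:
import org.slf4j.ILoggerFactory;
import org.slf4j.impl.StaticLoggerBinder;

import ch.qos.logback.classic.LoggerContext;

public class BindingSketch {
    public static void main(String[] args) {
        // Roughly the path org.slf4j.LoggerFactory takes with slf4j 1.7.x:
        // the binder's singleton hands back an ILoggerFactory ...
        ILoggerFactory factory = StaticLoggerBinder.getSingleton().getLoggerFactory();
        // ... which, with logback-classic bound, is the LoggerContext itself,
        // the object holding the whole configuration and the loggerCache.
        LoggerContext context = (LoggerContext) factory;
        System.out.println(context.getName());
    }
}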
2. Getting a Logger
public final Logger getLogger(final String name) {
if (name == null) {
throw new IllegalArgumentException("name argument cannot be null");
}
// if the ROOT logger is requested, return root directly
if (Logger.ROOT_LOGGER_NAME.equalsIgnoreCase(name)) {
return root;
}
int i = 0;
Logger logger = root;
// check whether the requested logger has already been created; if so, return it from loggerCache
Logger childLogger = (Logger) loggerCache.get(name);
// if we have the child, then let us return it without wasting time
if (childLogger != null) {
return childLogger;
}
// if the desired logger does not exist, them create all the loggers
// in between as well (if they don't already exist)
String childName;
while (true) {
int h = LoggerNameUtil.getSeparatorIndexOf(name, i);
if (h == -1) {
childName = name;
} else {
childName = name.substring(0, h);
}
i = h + 1;
synchronized (logger) {
childLogger = logger.getChildByName(childName);
if (childLogger == null) {
// create the Logger instance
childLogger = logger.createChildByName(childName);
loggerCache.put(childName, childLogger);
incSize();
}
}
logger = childLogger;
if (h == -1) {
return childLogger;
}
}
}
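The loop above also creates every intermediate logger between root and the requested name. A small sketch to observe this, assuming logback-classic is bound (the class and logger names are arbitrary):
import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.LoggerContext;

public class DemoHierarchy {
    public static void main(String[] args) {
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();
        // Requesting a deeply nested name...
        ctx.getLogger("com.example.service.OrderService");
        // ...also materializes "com", "com.example" and "com.example.service"
        // in the context, in addition to ROOT and the requested logger.
        ctx.getLoggerList().forEach(l -> System.out.println(l.getName()));
    }
}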
3. Writing a log entry with logger.info()
The slf4j Logger interface defines info(), warn(), debug() and similar methods for recording logs. These methods are only entry points; logback implements them like this:
public void info(String msg) {
filterAndLog_0_Or3Plus(FQCN, null, Level.INFO, msg, null, null);
}
When client code calls Logger.info(), execution actually enters filterAndLog_0_Or3Plus(). The Logger class contains several similarly named methods, such as filterAndLog_1 and filterAndLog_2.
/**
* The next methods are not merged into one because of the time we gain by not
* creating a new Object[] with the params. This reduces the cost of not
* logging by about 20 nanoseconds.
*/
private final void filterAndLog_0_Or3Plus(final String localFQCN,
final Marker marker, final Level level, final String msg,
final Object[] params, final Throwable t) {
final FilterReply decision = loggerContext.getTurboFilterChainDecision_0_3OrMore(marker, this, level, msg, params, t);
if (decision == FilterReply.NEUTRAL) {
if (effectiveLevelInt > level.levelInt) {
return;
}
} else if (decision == FilterReply.DENY) {
return;
}
buildLoggingEventAndAppend(localFQCN, marker, level, msg, params, t);
}
The method first asks the TurboFilter chain whether this logging call should be recorded at all. TurboFilters are fast, coarse-grained filters, and the filtering happens before the LoggingEvent is created; this design is again aimed at performance.
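To see where a TurboFilter hooks in, here is a sketch that registers one programmatically; the anonymous filter and the TurboFilterDemo class are made up for illustration, while LoggerContext.addTurboFilter() and the decide() signature come from logback-classic.
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.turbo.TurboFilter;
import ch.qos.logback.core.spi.FilterReply;

public class TurboFilterDemo {
    public static void main(String[] args) {
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();
        // Made-up filter: deny everything below DEBUG before a LoggingEvent is
        // even built, stay NEUTRAL otherwise so the normal level check still applies.
        TurboFilter denyBelowDebug = new TurboFilter() {
            @Override
            public FilterReply decide(Marker marker, Logger logger, Level level,
                                      String format, Object[] params, Throwable t) {
                return level.isGreaterOrEqual(Level.DEBUG)
                        ? FilterReply.NEUTRAL
                        : FilterReply.DENY;
            }
        };
        denyBelowDebug.start();
        ctx.addTurboFilter(denyBelowDebug);

        LoggerFactory.getLogger("demo").trace("dropped by the TurboFilter");
    }
}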
If the event makes it through the filters and the level check, control passes to buildLoggingEventAndAppend():
private void buildLoggingEventAndAppend(final String localFQCN,
final Marker marker, final Level level, final String msg,
final Object[] params, final Throwable t) {
LoggingEvent le = new LoggingEvent(localFQCN, this, level, msg, t, params);
le.setMarker(marker);
callAppenders(le);
}
This method first creates a LoggingEvent and then calls callAppenders(), asking every Appender attached to this Logger to record the event.
LoggingEvent is the class that carries the log data; everything that is eventually written out comes from this event object.
/**
* Invoke all the appenders of this logger.
*
* @param event
* The event to log
*/
public void callAppenders(ILoggingEvent event) {
int writes = 0;
for (Logger l = this; l != null; l = l.parent) {
writes += l.appendLoopOnAppenders(event);
if (!l.additive) {
break;
}
}
// No appenders in hierarchy
if (writes == 0) {
loggerContext.noAppenderDefinedWarning(this);
}
}
After the filter checks, the level match and the creation of the LoggingEvent, we finally reach the method that dispatches the event. It invokes every Appender attached to this Logger and then walks up the hierarchy invoking the Appenders of each ancestor Logger, stopping only when it reaches a Logger whose additive flag is false. This is also why a message is recorded twice when a child Logger and a parent Logger are attached to the same Appender.
private int appendLoopOnAppenders(ILoggingEvent event) {
if (aai != null) {
return aai.appendLoopOnAppenders(event);
} else {
return 0;
}
}
This in turn calls appendLoopOnAppenders() on AppenderAttachableImpl:
/**
* Call the <code>doAppend</code> method on all attached appenders.
*/
public int appendLoopOnAppenders(E e) {
int size = 0;
r.lock();
try {
for (Appender<E> appender : appenderList) {
appender.doAppend(e);
size++;
}
} finally {
r.unlock();
}
return size;
}
At this point the long call chain for recording a single log message hands over to the appenders: calling Appender.doAppend(LoggingEvent e) delegates the actual writing of the log entry to each Appender.
doAppend() in UnsynchronizedAppenderBase mainly records Status messages, checks whether the Appender's own filters let the event through, and finally calls the subclass's append() implementation. Looks familiar? This is the Template Method pattern.
public void doAppend(E eventObject) {
// WARNING: The guard check MUST be the first statement in the
// doAppend() method.
// prevent re-entry.
if (Boolean.TRUE.equals(guard.get())) {
return;
}
try {
guard.set(Boolean.TRUE);
if (!this.started) {
if (statusRepeatCount++ < ALLOWED_REPEATS) {
addStatus(new WarnStatus(
"Attempted to append to non started appender [" + name + "].",
this));
}
return;
}
if (getFilterChainDecision(eventObject) == FilterReply.DENY) {
return;
}
// ok, we now invoke derived class' implementation of append
this.append(eventObject);
} catch (Exception e) {
if (exceptionCount++ < ALLOWED_REPEATS) {
addError("Appender [" + name + "] failed to append.", e);
}
} finally {
guard.set(Boolean.FALSE);
}
}
abstract protected void append(E eventObject);
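Because append() is the hook that the template method leaves to subclasses, a custom appender only needs to extend the base class and override append(). A minimal sketch (the StdoutAppender name is made up; in a real setup it would be declared in logback.xml and started by Joran):
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.UnsynchronizedAppenderBase;

// Hypothetical appender: doAppend() in the base class handles the re-entry guard,
// status reporting and filters, and then calls this append() hook.
public class StdoutAppender extends UnsynchronizedAppenderBase<ILoggingEvent> {
    @Override
    protected void append(ILoggingEvent event) {
        System.out.println(event.getLevel() + " " + event.getFormattedMessage());
    }
}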
The doAppend() code itself is straightforward, so let's look directly at how a concrete class implements append(). We'll take OutputStreamAppender as the example:
@Override
protected void append(E eventObject) {
if (!isStarted()) {
return;
}
subAppend(eventObject);
}
It first checks whether the Appender has been started; if not, it returns immediately. Otherwise it moves on to subAppend().
RollingFileAppender overrides subAppend() to implement the rollover behaviour:
@Override
protected void subAppend(E event) {
// The roll-over check must precede actual writing. This is the
// only correct behavior for time driven triggers.
// We need to synchronize on triggeringPolicy so that only one rollover
// occurs at a time
synchronized (triggeringPolicy) {
if (triggeringPolicy.isTriggeringEvent(currentlyActiveFile, event)) {
rollover();
}
}
super.subAppend(event);
}
Stepping into super.subAppend(event):
protected void subAppend(E event) {
if (!isStarted()) {
return;
}
try {
// this step avoids LBCLASSIC-139
if (event instanceof DeferredProcessingAware) {
((DeferredProcessingAware) event).prepareForDeferredProcessing();
}
// the synchronization prevents the OutputStream from being closed while we
// are writing. It also prevents multiple threads from entering the same
// converter. Converters assume that they are in a synchronized block.
// lock.lock();
byte[] byteArray = this.encoder.encode(event);
writeBytes(byteArray);
} catch (IOException ioe) {
// as soon as an exception occurs, move to non-started state
// and add a single ErrorStatus to the SM.
this.started = false;
addStatus(new ErrorStatus("IO failure in appender", this, ioe));
}
}
this.encoder.encode(event) formats the log entry, and writeBytes(byteArray) then writes the encoded bytes out.
IV. Creating logger objects dynamically in code
import static java.nio.charset.StandardCharsets.UTF_8;

import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.rolling.RollingFileAppender;
import ch.qos.logback.core.rolling.TimeBasedRollingPolicy;

public class LoggerHolder {

    public static Logger getLogger(String name) {
        LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
        // if this logger has not been created yet, build and configure it
        if (loggerContext.exists(name) == null) {
            return buildLogger(name);
        }
        // otherwise return the existing logger
        return loggerContext.getLogger(name);
    }

    private static Logger buildLogger(String name) {
        LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
        Logger logger = loggerContext.getLogger(name);

        // configure the RollingFileAppender (hand-built components need a context and start())
        RollingFileAppender<ILoggingEvent> rollingFileAppender = new RollingFileAppender<>();
        rollingFileAppender.setName(name);
        rollingFileAppender.setContext(loggerContext);

        // configure the rolling policy: one file per day
        TimeBasedRollingPolicy<ILoggingEvent> rollingPolicy = new TimeBasedRollingPolicy<>();
        rollingPolicy.setContext(loggerContext);
        rollingPolicy.setParent(rollingFileAppender);
        rollingPolicy.setFileNamePattern("/data/pjf/" + name + "/" + name + ".%d{yyyyMMdd}.log");
        rollingPolicy.start();
        rollingFileAppender.setRollingPolicy(rollingPolicy);

        // configure the encoder
        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(loggerContext);
        encoder.setCharset(UTF_8);
        encoder.setPattern("%msg%n");
        encoder.start();
        rollingFileAppender.setEncoder(encoder);

        rollingFileAppender.start();

        // configure the logger itself
        logger.addAppender(rollingFileAppender);
        logger.setAdditive(false);
        logger.setLevel(Level.INFO);
        return logger;
    }
}
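Usage is then just a matter of asking LoggerHolder for a logger by name; a small sketch with made-up names:
import ch.qos.logback.classic.Logger;

public class LoggerHolderDemo {
    public static void main(String[] args) {
        // each distinct name gets its own appender writing to /data/pjf/<name>/<name>.<date>.log
        Logger orderLog = LoggerHolder.getLogger("order");
        Logger paymentLog = LoggerHolder.getLogger("payment");
        orderLog.info("order 1001 created");
        paymentLog.info("payment for order 1001 accepted");
    }
}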
For a much more detailed walkthrough, see this series of articles on the logback source code:
读logback源码系列文章(一)——对接slf4j
读logback源码系列文章(二)——提供ILoggerFactory
读logback源码系列文章(三)——创建Logger
读logback源码系列文章(四)——记录日志
读logback源码系列文章(五)——Appender
读logback源码系列文章(六)——ContextInitializer
读logback源码系列文章(七)——配置的实际工作类Action
读logback源码系列文章(八)——记录日志的实际工作类Encoder