I have a Dropwizard app where I configured file logger appenders as follows:
logging:
  level: INFO
  loggers:
    "mylogger": INFO
    "com.path.to.
You can implement a separate logger in Dropwizard using Logback.
1. Configure the logger in your Application class (i.e. the application entry point with the main method) like below.
import org.slf4j.LoggerFactory;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.util.ContextInitializer;

// Reset the default context and reload the configuration from logback.xml
LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
context.reset();
ContextInitializer initializer = new ContextInitializer(context);
initializer.autoConfig();
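For context, here is a minimal sketch of where these lines would live, assuming an application class named ExampleApplication (the class name is illustrative):

import org.slf4j.LoggerFactory;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.util.ContextInitializer;
import io.dropwizard.Application;
import io.dropwizard.Configuration;
import io.dropwizard.setup.Environment;

public class ExampleApplication extends Application<Configuration> {

    public static void main(String[] args) throws Exception {
        // Re-initialise Logback from logback.xml before Dropwizard starts
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        context.reset();
        new ContextInitializer(context).autoConfig();

        new ExampleApplication().run(args);
    }

    @Override
    public void run(Configuration configuration, Environment environment) {
        // register resources, health checks, etc. here
    }
}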
2. Configure logback.xml like below.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="OpLogger" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/var/log/applicationname-mylogger.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>/var/log/applicationname-mylogger.%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%-5relative %-5level %logger{35} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="classLogger" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/var/log/applicationname-com.path.to.class.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- daily rollover -->
            <fileNamePattern>/var/log/applicationname-com.path.to.class.%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- keep 30 days' worth of history -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%-5relative %-5level %logger{35} - %msg%n</pattern>
        </encoder>
    </appender>

    <logger name="mylogger">
        <level value="INFO" />
        <appender-ref ref="OpLogger" />
    </logger>

    <logger name="com.path.to.class">
        <level value="INFO" />
        <appender-ref ref="classLogger" />
    </logger>
</configuration>
3. Now use the loggers:
static final Logger OpLogger = LoggerFactory.getLogger("mylogger");
static final Logger classLogger = LoggerFactory.getLogger("com.path.to.class");
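For example, in a class that uses both loggers (the class name and messages are illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyService {
    static final Logger OpLogger = LoggerFactory.getLogger("mylogger");
    static final Logger classLogger = LoggerFactory.getLogger("com.path.to.class");

    public void doWork() {
        OpLogger.info("goes to /var/log/applicationname-mylogger.log");
        classLogger.info("goes to /var/log/applicationname-com.path.to.class.log");
    }
}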
EDIT:
I have tried to implement the same logger in my example project, and it works fine in my case. Note that we cannot use the logger before the Dropwizard application is initialized. Dropwizard is initialized only when you call
new ExampleApplication().run(args);
So, if a logger is used before Dropwizard is initialized, its output will not be there. I tried this scenario in the main method: the first log statement below is not printed, because the logger is used before Dropwizard is initialized, but the second log statement is printed.
OpLogger.info("test 1");
new ExampleApplication().run(args);
classLogger.info("test 2");
Hope this helps you solve your problem.
UPDATE: The latest versions of Dropwizard support this kind of logging configuration out of the box.
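In recent Dropwizard versions, appenders can be attached to individual loggers directly in the YAML, roughly along these lines (file names are illustrative; check the logging section of the configuration manual for your version):

logging:
  level: INFO
  loggers:
    "mylogger":
      level: INFO
      additive: false
      appenders:
        - type: file
          currentLogFilename: /var/log/applicationname-mylogger.log
          archivedLogFilenamePattern: /var/log/applicationname-mylogger-%d.log.gz
          archivedFileCount: 30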
I ran into the same issue trying to set up Dropwizard (0.8.4) with separate files. So I dug a bit deeper and found a solution that works for me (not the cleanest, but I couldn't get it working differently).
The issue is that LoggingFactory#configure automatically adds every appender to the root logger. This is not ideal, so it needed overriding. What I did was subclass LoggingFactory. This is slightly messy since a few things sadly need to be copied :( Here is my implementation:
import java.io.PrintStream;
import java.lang.management.ManagementFactory;
import java.util.Map;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanRegistrationException;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.NotCompliantMBeanException;
import javax.management.ObjectName;
import org.slf4j.LoggerFactory;
import org.slf4j.bridge.SLF4JBridgeHandler;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.logback.InstrumentedAppender;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.google.common.collect.ImmutableMap;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.PatternLayout;
import ch.qos.logback.classic.jmx.JMXConfigurator;
import ch.qos.logback.classic.jul.LevelChangePropagator;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.util.StatusPrinter;
import io.dropwizard.logging.AppenderFactory;
import io.dropwizard.logging.LoggingFactory;
public class BetterDropWizardLoggingConfig extends LoggingFactory {

    @JsonIgnore
    final LoggerContext loggerContext;

    @JsonIgnore
    final PrintStream configurationErrorsStream;

    // appenderName -> loggerName mapping, configured in the yaml
    @JsonProperty("loggerMapping")
    private ImmutableMap<String, String> loggerMappings;

    private static void hijackJDKLogging() {
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        SLF4JBridgeHandler.install();
    }

    public BetterDropWizardLoggingConfig() {
        PatternLayout.defaultConverterMap.put("h", HostNameConverter.class.getName());
        this.loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();
        this.configurationErrorsStream = System.err;
    }

    private Logger configureLevels() {
        final Logger root = loggerContext.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        loggerContext.reset();

        final LevelChangePropagator propagator = new LevelChangePropagator();
        propagator.setContext(loggerContext);
        propagator.setResetJUL(true);
        loggerContext.addListener(propagator);

        root.setLevel(getLevel());

        for (Map.Entry<String, Level> entry : getLoggers().entrySet()) {
            loggerContext.getLogger(entry.getKey()).setLevel(entry.getValue());
        }

        return root;
    }

    @Override
    public void configure(MetricRegistry metricRegistry, String name) {
        hijackJDKLogging();

        final Logger root = configureLevels();

        for (AppenderFactory output : getAppenders()) {
            Appender<ILoggingEvent> build = output.build(loggerContext, name, null);
            if (output instanceof MappedLogger && ((MappedLogger) output).getLoggerName() != null) {
                // attach the appender to the mapped logger instead of root
                String appenderName = ((MappedLogger) output).getLoggerName();
                String loggerName = loggerMappings.get(appenderName);
                Logger logger = this.loggerContext.getLogger(loggerName);
                logger.addAppender(build);
            } else {
                root.addAppender(build);
            }
        }

        StatusPrinter.setPrintStream(configurationErrorsStream);
        try {
            StatusPrinter.printIfErrorsOccured(loggerContext);
        } finally {
            StatusPrinter.setPrintStream(System.out);
        }

        final MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        try {
            final ObjectName objectName = new ObjectName("io.dropwizard:type=Logging");
            if (!server.isRegistered(objectName)) {
                server.registerMBean(new JMXConfigurator(loggerContext, server, objectName), objectName);
            }
        } catch (MalformedObjectNameException | InstanceAlreadyExistsException | NotCompliantMBeanException
                | MBeanRegistrationException e) {
            throw new RuntimeException(e);
        }

        configureInstrumentation(root, metricRegistry);
    }

    private void configureInstrumentation(Logger root, MetricRegistry metricRegistry) {
        final InstrumentedAppender appender = new InstrumentedAppender(metricRegistry);
        appender.setContext(loggerContext);
        appender.start();
        root.addAppender(appender);
    }
}
As you can see, I sadly had to copy/paste a few private members and methods to make things work as intended.
I added a new field:
@JsonProperty("loggerMapping")
private ImmutableMap<String, String> loggerMappings;
This allows me to configure a mapping from appender to logger. This isn't possible out of the box, because I can't get at an appender's name (Dropwizard defaults the appender names, which is very inconvenient ...).
So I added a new appender type, which in my case also does hostname substitution, which I needed for other reasons. For this I extend the good old FileAppenderFactory and implement my own interface MappedLogger. Implementation here:
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.UUID;

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonTypeName;

import io.dropwizard.logging.AppenderFactory;
import io.dropwizard.logging.FileAppenderFactory;

@JsonTypeName("hostnameFile")
public class HostnameFileAppender extends FileAppenderFactory implements AppenderFactory, MappedLogger {

    private static String uuid = UUID.randomUUID().toString();

    // optional appender name, used as the key into the loggerMapping
    @JsonProperty
    private String name;

    @Override
    public void setCurrentLogFilename(String currentLogFilename) {
        super.setCurrentLogFilename(substitute(currentLogFilename));
    }

    @Override
    public void setArchivedLogFilenamePattern(String archivedLogFilenamePattern) {
        super.setArchivedLogFilenamePattern(substitute(archivedLogFilenamePattern));
    }

    // replace ${HOSTNAME} in file names with the local hostname (or a UUID as fallback)
    private String substitute(final String pattern) {
        String substitute = null;
        try {
            substitute = InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            System.err.println("Failed to get local hostname:");
            e.printStackTrace(System.err);
            substitute = uuid;
            System.err.println("Using " + substitute + " as fallback.");
        }
        return pattern.replace("${HOSTNAME}", substitute);
    }

    @Override
    public String getLoggerName() {
        return name;
    }
}
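The MappedLogger interface itself isn't shown above; it only needs to expose the optional appender name that the config class looks up, so a minimal sketch would be:

// Minimal sketch of the MappedLogger interface referenced above (not shown in the original answer)
public interface MappedLogger {
    String getLoggerName();
}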
Please note that in order to add a new JSON type, you will have to follow the JavaDoc in AppenderFactory (add a META-INF services entry to the classpath so the new appender is discoverable).
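With Dropwizard's standard service-discovery mechanism, that boils down to a plain text file on the classpath that names the new factory; a sketch, assuming the class lives in a com.example.logging package (the package name is illustrative):

src/main/resources/META-INF/services/io.dropwizard.logging.AppenderFactory
    com.example.logging.HostnameFileAppender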
So far so good: we now have a config that can pick up logger mappings, and an appender that can take an optional name. In the configure method I now tie the two together:
for (AppenderFactory output : getAppenders()) {
    Appender<ILoggingEvent> build = output.build(loggerContext, name, null);
    if (output instanceof MappedLogger && ((MappedLogger) output).getLoggerName() != null) {
        String appenderName = ((MappedLogger) output).getLoggerName();
        String loggerName = loggerMappings.get(appenderName);
        Logger logger = this.loggerContext.getLogger(loggerName);
        logger.addAppender(build);
    } else {
        root.addAppender(build);
    }
}
For backwards compatibility I kept the default behaviour: if no name is defined, the appender is added to the root logger. Otherwise I resolve the mapped logger and add the appender to it, as desired.
And last but not least, the good old YAML config:
logging:
  # The default level of all loggers. Can be OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL.
  level: INFO
  loggers:
    "EVENT": INFO
  loggerMapping:
    # for easier search this is defined as: appenderName -> loggerName rather than the other way around
    "eventLog": "EVENT"
  appenders:
    - type: console
      threshold: ALL
      logFormat: "myformat"
    - type: hostnameFile    # NOTE THE NEW TYPE WITH HOSTNAME RESOLVE
      currentLogFilename: /Users/artur/tmp/log/my-${HOSTNAME}.log
      threshold: ALL
      archive: true
      archivedLogFilenamePattern: mypattern
      archivedFileCount: 31
      timeZone: UTC
      logFormat: "myFormat"
    - type: hostnameFile
      name: eventLog        # NOTE THE APPENDER NAME
      currentLogFilename: something
      threshold: ALL
      archive: true
      archivedLogFilenamePattern: something
      archivedFileCount: 31
      timeZone: UTC
      logFormat: "myFormat"
    - type: hostnameFile
      currentLogFilename: something
      threshold: ERROR
      archive: true
      archivedLogFilenamePattern: something
      archivedFileCount: 31
      timeZone: UTC
      logFormat: "myFormat"
As you can see, I am mapping the eventLog appender to the EVENT logger. This way all my events end up in one file, while the other information ends up somewhere else.
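Application code can then write to the event file simply by looking up the mapped logger by name; a small illustrative sketch (the class and message are made up):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EventAuditor {
    // "EVENT" matches the logger name configured under loggers/loggerMapping above
    private static final Logger EVENT_LOG = LoggerFactory.getLogger("EVENT");

    public void recordLogin(String userId) {
        EVENT_LOG.info("user {} logged in", userId); // goes to the eventLog appender's file
    }
}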
I hope this helps. It might not be the cleanest solution, but I don't think Dropwizard supports this feature out of the box currently.