Problem Description
I'm currently submitting Storm topologies programmatically from my Java application, using the following code:
Nimbus.Client client = NimbusClient.getConfiguredClient(stormConfigProvider.getStormConfig()).getClient();
client.submitTopology(
    this.topologyID.toString(),
    stormJarManager.getRemoteJarLocation(),
    JSONValue.toJSONString(stormConfigProvider.getStormConfig()),
    topology
);
In my scenario, I have two kinds of topologies: testing topologies and production topologies. Each kind requires different logging. While the testing topologies run with TRACE level, the production topologies will run with INFO level. In addition, I require that the production topologies have a Splunk Log4j2 appender configured, to centralize the logging of my production application.
For that, I included a log4j.xml file in my topology JAR which configures the Splunk appender. However, the log4j.xml file is not honored by the server. Instead, the Storm server seems to use its own configuration.
How can I change my log4j configuration for different topologies? (I don't want to modify the log4j.xml on each worker).
Recommended Answer
You can use Storm's dynamic log level settings (https://storm.apache.org/releases/current/dynamic-log-level-settings.html) to set log levels for each topology.
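Since you already talk to Nimbus from Java, that feature can also be driven through the Nimbus client you hold. Below is a rough sketch, assuming the thrift-generated LogConfig / LogLevel / LogLevelAction classes and the setLogConfig Nimbus call of Storm 1.x+; the topology name, level, and timeout are placeholders:

import java.util.Map;

import org.apache.storm.generated.LogConfig;
import org.apache.storm.generated.LogLevel;
import org.apache.storm.generated.LogLevelAction;
import org.apache.storm.generated.Nimbus;
import org.apache.storm.utils.NimbusClient;

public class TopologyLogLevels {

    // Raise the root logger of a running topology to TRACE for one hour.
    public static void enableTraceLogging(Map<String, Object> stormConf, String topologyName) throws Exception {
        Nimbus.Client client = NimbusClient.getConfiguredClient(stormConf).getClient();

        LogLevel logLevel = new LogLevel();
        logLevel.set_action(LogLevelAction.UPDATE);        // add or update this logger's level
        logLevel.set_target_log_level("TRACE");            // desired level for the test topology
        logLevel.set_reset_log_level_timeout_secs(3600);   // revert to the original level after 1h

        LogConfig logConfig = new LogConfig();
        logConfig.put_to_named_logger_level("ROOT", logLevel);  // "ROOT" targets the root logger

        client.setLogConfig(topologyName, logConfig);
    }
}

The page linked above also documents the equivalent storm set_log_level CLI command and the Storm UI controls, so the same change can be made without any code.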
I'm not sure how you'd add/remove the Splunk appender based on the loaded topology. You might be able to configure Log4j programmatically (https://logging.apache.org/log4j/2.x/manual/customconfig.html) and set the log4j2.configurationFactory system property on your workers to point to your configuration factory (you can do this by adding it to the topology.worker.childopts property in your topology config).
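As a rough sketch of that idea, modelled on the ConfigurationFactory example in the Log4j2 manual linked above: the class and package name are hypothetical, the console appender merely stands in for the Splunk appender (whose plugin name and attributes depend on the Splunk logging library you use), and the factory would have to ship inside the topology JAR so the worker can load it.

import java.net.URI;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.ConsoleAppender;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.ConfigurationFactory;
import org.apache.logging.log4j.core.config.ConfigurationSource;
import org.apache.logging.log4j.core.config.Order;
import org.apache.logging.log4j.core.config.builder.api.AppenderComponentBuilder;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;
import org.apache.logging.log4j.core.config.plugins.Plugin;

// Hypothetical factory bundled in the topology JAR and selected via
// -Dlog4j2.configurationFactory=com.example.ProductionLoggingFactory
@Plugin(name = "ProductionLoggingFactory", category = ConfigurationFactory.CATEGORY)
@Order(50)
public class ProductionLoggingFactory extends ConfigurationFactory {

    private static Configuration createConfiguration(String name, ConfigurationBuilder<BuiltConfiguration> builder) {
        builder.setConfigurationName(name);
        // Console appender as a stand-in; a Splunk appender would be added the same way
        // (its plugin name and attributes come from the Splunk logging library).
        AppenderComponentBuilder console = builder.newAppender("Stdout", "CONSOLE")
                .addAttribute("target", ConsoleAppender.Target.SYSTEM_OUT);
        console.add(builder.newLayout("PatternLayout")
                .addAttribute("pattern", "%d [%t] %-5level %logger{36} - %msg%n"));
        builder.add(console);
        // Production topologies log at INFO.
        builder.add(builder.newRootLogger(Level.INFO).add(builder.newAppenderRef("Stdout")));
        return builder.build();
    }

    @Override
    public Configuration getConfiguration(LoggerContext loggerContext, ConfigurationSource source) {
        return getConfiguration(loggerContext, source.toString(), null);
    }

    @Override
    public Configuration getConfiguration(LoggerContext loggerContext, String name, URI configLocation) {
        ConfigurationBuilder<BuiltConfiguration> builder = ConfigurationBuilderFactory.newConfigurationBuilder();
        return createConfiguration(name, builder);
    }

    @Override
    protected String[] getSupportedTypes() {
        return new String[] { "*" };
    }
}

A production topology could then point its workers at the factory with something like stormConfig.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Dlog4j2.configurationFactory=com.example.ProductionLoggingFactory"); (again, package and class name are placeholders), while test topologies simply omit that option.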
Just for context, here's where Storm sets the system property that causes Log4j to load the worker log4j configuration: https://github.com/apache/storm/blob/4137328b75c06771f84414c3c2113e2d1c757c08/storm-server/src/main/java/org/apache/storm/daemon/supervisor/BasicContainer.java#L560. If you wanted to load a log4j2.xml included in your topology jar, maybe it would be possible to conditionally exclude that setting from the system properties set for workers. I think it would require a code change though, so you'd need to raise an issue on https://issues.apache.org/jira.