When starting Logstash 2.0, I'm getting:

```
# cat /var/log/logstash/logstash.err
log4j:WARN No appenders could be found for logger (org.apache.http.impl.conn.PoolingHttpClientConnectionManager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
```
Freshly installed from the website:

```
# rpm -qa | grep logstash
logstash-2.0.0-1.noarch
```
Looks to be the same issue as https://logstash.jira.com/browse/LOGSTASH-302
Similar to https://discuss.elastic.co/t/log4j-warn-no-appenders-could-be-found-for-logger-node/11907/8
Trying to see exactly how I'd include a log4j.properties file as in https://gist.github.com/jordansissel/1004979
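Following that gist, a minimal log4j 1.x properties file can be sketched as below. This is a hypothetical workaround, not a confirmed fix: the file path and the idea of handing it to the JVM via `LS_JAVA_OPTS` (e.g. `-Dlog4j.configuration=file:///etc/logstash/log4j.properties`) are assumptions.

```shell
# Hypothetical workaround: write a minimal log4j 1.x config that routes
# everything at WARN and above to the console. Written to the current
# directory here; /etc/logstash/log4j.properties is one plausible home.
cat > ./log4j.properties <<'EOF'
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
EOF
```

The JVM would then need to be pointed at the file, presumably by appending `-Dlog4j.configuration=file:///etc/logstash/log4j.properties` to `LS_JAVA_OPTS` in `/etc/default/logstash`.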
@cptcanuck what configuration are you using?
(edit: formatting was horrible, sorry)
I'm running the default /etc/default/logstash that is installed with the package:

```
# Default settings for logstash

# Override Java location
JAVACMD=/usr/bin/java

# Set a home directory
LS_HOME=/var/lib/logstash

# Arguments to pass to logstash agent
LS_OPTS=""

# Arguments to pass to java
LS_JAVA_OPTS="-Djava.io.tmpdir=$HOME"

# pidfiles aren't used for upstart; this is for sysv users.
LS_PIDFILE=/var/run/logstash.pid

# user id to be invoked as; for upstart: edit /etc/init/logstash.conf
LS_USER=logstash

# logstash logging
LS_LOG_FILE=/var/log/logstash/logstash.log
LS_USE_GC_LOGGING="true"

# logstash configuration directory
LS_CONF_DIR=/etc/logstash/conf.d

# Open file limit; cannot be overridden in upstart
LS_OPEN_FILES=16384

# Nice level
LS_NICE=19

# If this is set to 1, then when stop is called, if the process has not
# exited within a reasonable time, SIGKILL will be sent next.
# The default behavior is to simply log a message "program stop failed; still running"
KILL_ON_STOP_TIMEOUT=0
```
and the following config (cleaned up and obfuscated):

```
input {
  kafka {
    zk_connect => "zk1:2181,zk2:2181,zk3:2181"
    topic_id => "testtopic"
    codec => plain { charset => "ISO-8859-1" }
    type => "testtopic"
    decorate_events => "true"
  }
}
filter {
  if [type] == "testtopic" {
    grok {
      match => { "message" => ".*%{TIMESTAMP_ISO8601:datetime} %{HOSTNAME:sending_host}.*%{HOSTNAME:device_name} %{DATA:vs_name}-c%{DATA:containment} c=%{IP:client_ip}/%{POSINT:client_port} v=%{IP:virtual_ip}/%{POSINT:virtual_port} s=%{IP:snat_ip}/%{POSINT:snat_port} n=%{IP:new_ip}/%{POSINT:new_port}" }
    }
    geoip {
      source => "client_ip"
      target => "geoip"
      database => "/etc/logstash/GeoLiteCity.dat"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
  }
}
output {
  if [type] == "testtopic" {
    elasticsearch {
      hosts => ["localhost"]
    }
  }
}
```
This seems more like a bug in the Elasticsearch output (which I assume is what is invoking the apache http code?).
We should not have a log4j.properties. Plugins should be passing whatever internal loggers need whatever settings.
Annoyingly, I stood up another system with the same install and same configuration (using puppet, so reliably the same), and I'm not getting this error.
I am running into the same error.
I tested three basic configurations containing an elasticsearch output and a log4j input...
I get these log4j messages with configuration #3 only:

```
log4j:WARN No appenders could be found for logger (org.apache.http.impl.conn.PoolingHttpClientConnectionManager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
```
config #1:

```
input {
  stdin {
  }
}
output {
  stdout {
    codec => rubydebug
  }
  if "elasticsearch" in [tags] {
    elasticsearch {
      hosts => [ "127.0.0.1:9200" ]
    }
  }
}
```
config #2:

```
input {
  log4j {
  }
  stdin {
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
```
config #3:

```
input {
  log4j {
  }
  stdin {
  }
}
output {
  stdout {
    codec => rubydebug
  }
  if "elasticsearch" in [tags] {
    elasticsearch {
      hosts => [ "127.0.0.1:9200" ]
    }
  }
}
```
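The three configurations above can be regenerated quickly for this kind of bisection. The sketch below only writes the files (the `/tmp/ls-repro` path is arbitrary, and the tag conditional from config #1/#3 is dropped for brevity); each would then be run through `logstash -f <file> --configtest` separately.

```shell
# Write minimal repro configs so each input/output combination can be
# config-tested in isolation.
mkdir -p /tmp/ls-repro
printf 'input { stdin {} }\noutput { stdout { codec => rubydebug } elasticsearch { hosts => ["127.0.0.1:9200"] } }\n' \
  > /tmp/ls-repro/config1.conf
printf 'input { log4j {} stdin {} }\noutput { stdout { codec => rubydebug } }\n' \
  > /tmp/ls-repro/config2.conf
printf 'input { log4j {} stdin {} }\noutput { stdout { codec => rubydebug } elasticsearch { hosts => ["127.0.0.1:9200"] } }\n' \
  > /tmp/ls-repro/config3.conf
ls /tmp/ls-repro
```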
Pretty sure that it is the core Manticore client used by the Elasticsearch output (http protocol) that uses the Apache PoolingHttpClientConnectionManager.
This looks like a potential class conflict issue related to log4j because simply having an ES output (http) does not generally reproduce the issue. But if you have an input that uses log4j, then the warning shows up even if you run a configtest:
```
./logstash -f test.conf --configtest
log4j:WARN No appenders could be found for logger (org.apache.http.impl.conn.PoolingHttpClientConnectionManager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Configuration OK
```
Consider the following configurations:
This one _does not_ produce the warnings:
```
input {
  kafka {
    zk_connect => "some host"
    group_id => "group"
    topic_id => "topic"
  }
  #stdin{}
}
output {
  # elasticsearch {
  #   hosts => ["some host"]
  # }
  stdout{}
}
```
This one also _does not_ produce the warnings:
```
input {
  #kafka {
  #  zk_connect => "some host"
  #  group_id => "group"
  #  topic_id => "topic"
  #}
  stdin{}
}
output {
  elasticsearch {
    hosts => ["some host"]
  }
  #stdout{}
}
```
However, this one _does_ produce the warnings:
```
input {
  kafka {
    zk_connect => "some host"
    group_id => "group"
    topic_id => "topic"
  }
  #stdin{}
}
output {
  elasticsearch {
    hosts => ["some host"]
  }
  #stdout{}
}
```
+1
What is the status of this bug? I'm using Logstash 2.1.1 and the issue is still there...
OS: Gentoo Linux amd64
Java: 1.8.0.72
+1
Can still see the issue on 2.4.0. Java 8.
Same for me with logstash 2.4.0 and openjdk version "1.8.0_101" (centos 7.2)
Hi, same for me with the new Logstash 5.2.1 and JRE build 1.8.0_121-b13
Is there any solution for this?
thank you
Still happening on logstash 5.3.0
Also happening in logstash 5.3.2
I'm also experiencing this, though on logstash 5.2.2.
For folks dialing in with "me too"-style replies. Please include the message you are receiving from Logstash.
Also, it would be helpful for folks to detail the negative impact of this problem.
@jovanmal, @florian-asche, @sickyazone, @lifeofguenter ^^ see my previous comment. We need more details -- the specific messages you are seeing and a detail of the negative impact.
On Ubuntu 16.04 running logstash 5.4.0-1, I get the following in the logs after a restart. I'm not sure if there is a negative impact, in that I don't know how it should behave without this message.

```
root@logstash01:/etc/elasticsearch# service logstash restart; tail -f /var/log/syslog
May 15 22:17:07 logstash01 systemd[1]: Started Elasticsearch.
May 15 22:17:09 logstash01 kibana[15663]: {"type":"log","@timestamp":"2017-05-15T22:17:09Z","tags":["status","plugin:[email protected]","error"],"pid":15663,"state":"red","message":"Status changed from red to red - Unable to connect to Elasticsearch at http://localhost:9200.","prevState":"red","prevMsg":"No Living connections"}
May 15 22:17:19 logstash01 kibana[15663]: {"type":"log","@timestamp":"2017-05-15T22:17:19Z","tags":["status","plugin:[email protected]","error"],"pid":15663,"state":"red","message":"Status changed from red to red - Elasticsearch is still initializing the kibana index.","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://localhost:9200."}
May 15 22:17:59 logstash01 systemd[1]: Stopping Kibana...
May 15 22:17:59 logstash01 systemd[1]: Stopped Kibana.
May 15 22:17:59 logstash01 systemd[1]: Started Kibana.
May 15 22:18:05 logstash01 kibana[23861]: {"type":"log","@timestamp":"2017-05-15T22:18:05Z","tags":["listening","info"],"pid":23861,"message":"Server running at http://localhost:5601"}
May 15 22:20:09 logstash01 systemd[1]: Stopping logstash...
May 15 22:20:15 logstash01 systemd[1]: Stopped logstash.
May 15 22:20:15 logstash01 systemd[1]: Started logstash.
May 15 22:20:26 logstash01 logstash[24090]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
May 15 22:20:28 logstash01 logstash[24090]: log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
May 15 22:20:28 logstash01 logstash[24090]: log4j:WARN Please initialize the log4j system properly.
May 15 22:20:28 logstash01 logstash[24090]: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
```
Ubuntu 16.04
Logstash 5.4.0 (upgraded from 2.3.4)
Elasticsearch and Kibana also 5.4.0
Same log error:

```
May 24 00:40:16 cloud-dlogs logstash[30738]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
May 24 00:40:17 cloud-dlogs logstash[30738]: log4j:WARN No appenders could be found for logger (org.apache.http.client.protocol.RequestAuthCache).
May 24 00:40:17 cloud-dlogs logstash[30738]: log4j:WARN Please initialize the log4j system properly.
May 24 00:40:17 cloud-dlogs logstash[30738]: log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
```

Impact: **logstash stops working after this issue. Nothing is coming into the Elasticsearch database** (it was working in the previous version)
```
input {
  beats {
    port => "5044"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch {
    hosts => localhost
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
```
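One way to check the claim that nothing reaches Elasticsearch would be to query the document count of today's daily index. The sketch below only builds the `_cat/count` URL; the `filebeat-` index name and `localhost:9200` host are assumptions based on the config above, and actually running the final `curl` requires a reachable cluster.

```shell
# Build the _cat/count URL for today's daily index (index prefix and
# host are assumptions; adjust to match your beats setup).
index="filebeat-$(date +%Y.%m.%d)"
url="http://localhost:9200/_cat/count/${index}?h=count"
echo "$url"
# Then, against a live cluster: curl -s "$url"
```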
Any further information that I can provide?
Edit: nevermind, I got it working. Steps I followed:
After that, everything worked as expected with a fresh install. I suspect some issue with the plugins or plugin migration, since configurations taken directly from the Elastic page (i.e. the echo configuration) were working.
Addressing only the title issue
Logstash: Log4j system not initialized properly
I am marking this as resolved with 5.6.0 and 6.0.0 via https://github.com/elastic/logstash/issues/7526
Note - the Warning is harmless and can be ignored.
Please open new ticket(s) if other issues are still present.
As of version 5.5.0 this has gotten worse; on 5.4.x I used to receive the warning in this issue. As of 5.5.0:

```
Jul 07 13:48:11 prd-log-003[30203]: ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Jul 07 13:48:12 prd-log-003[30203]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Jul 07 13:48:13 prd-log-003[30203]: 2017-07-07 13:48:13,360 Api Webserver ERROR No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Jul 07 13:48:13 prd-log-003 systemd[1]: logstash.service: main process exited, code=exited, status=1/FAILURE
```

Logstash seems to crash on this now?
@jakelandis I saw the same kind of crash as the poster above me this morning (only in 5.x, not in master though). The way I could fix it (I just randomly came across this) was to run the Gradle tests (I did run `rake compile:all` and it didn't help; only running the tests did).
We should probably open a new issue for this, yes, but maybe you have a quick idea where the packaging could be going wrong? :)
@vandernorth - can you open a new issue for the crashing, with any details that may help to reproduce it? We need to get to the bottom of that, but I am pretty confident it is unrelated to the log4j2 configuration log message.
@original-brownbear - Weird! No idea why running tests would fix anything.
@jakelandis My money would be on either Gradle or more likely Rake missing some dependent task setting :)