
rsyslog server template consideration for multiple remote hosts (link to a previously answered question)

@meuh, I find this post very useful, as I am currently working on this configuration.

I have followed the steps mentioned above and everything is working fine.

I now have an ELK setup where rsyslog forwards the logs to it.

My templates are:

$template templmesg,"/data01/RemoteLogs/DLF/%$YEAR%/%$MONTH%/%HOSTNAME%/%HOSTNAME%-%$DAY%-%$MONTH%-%$YEAR%.log"

$template mylogsec,"/data01/RemoteLogs/Logserver/%$YEAR%/%$MONTH%/%HOSTNAME%/%HOSTNAME%-%$DAY%-%$MONTH%-%$YEAR%.log"

if $fromhost startswith "10.100.10" then ?templmesg
& stop
if $fromhost startswith "10.100.112" then ?mylogsec
& stop

So I have two locations where logs are stored.

Because the logs are stored in multiple locations (DLF and Logserver), Kibana (from ELK) does not show all the logs received by rsyslog. It only reads logs from one location, the DLF/ directory, and not from Logserver/.

Now I am stuck: I don't know how to forward the rsyslog logs from both locations to ELK so that they show up in Kibana. Is there some specific rsyslog configuration I need to work out?

Below is the rsyslog configuration file:

# For more information see /usr/share/doc/rsyslog-*/rsyslog_conf.html
# If you experience problems, see http://www.rsyslog.com/doc/troubleshoot.html

#### MODULES ####

# The imjournal module below is now used as a message source instead of imuxsock.
$ModLoad imuxsock # provides support for local system logging (e.g. via logger command)
$ModLoad imjournal # provides access to the systemd journal
$ModLoad imklog # reads kernel messages (the same are read from journald)
#$ModLoad immark  # provides --MARK-- message capability

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514


#### GLOBAL DIRECTIVES ####

# Where to place auxiliary files
$WorkDirectory /var/lib/rsyslog

# Use default timestamp format
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

# File syncing capability is disabled by default. This feature is usually not required,
# not useful and an extreme performance hit
#$ActionFileEnableSync on

# Include all config files in /etc/rsyslog.d/
$IncludeConfig /etc/rsyslog.d/*.conf

# Turn off message reception via local log socket;
# local messages are retrieved through imjournal now.
$OmitLocalLogging on

# File to store the position in the journal
$IMJournalStateFile imjournal.state


#### RULES ####

# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.*                                                 /dev/console

$template templmesg,"/data01/RemoteLogs/DLF/%$YEAR%/%$MONTH%/%HOSTNAME%/%HOSTNAME%-%$DAY%-%$MONTH%-%$YEAR%.log"
$template mylogsec,"/data01/RemoteLogs/DLF/Logserver/%$YEAR%/%$MONTH%/%HOSTNAME%/%HOSTNAME%-%$DAY%-%$MONTH%-%$YEAR%.log"

#if $fromhost startswith "10.100.10" then ?templmesg 
#& stop
if $fromhost startswith "10.100.112" then ?mylogsec 
& stop

local0.*                                                        ?templmesg
local1.*                                                        ?templmesg
local2.*                                                        ?templmesg
local3.*                                                        ?templmesg
local4.*                                                        ?templmesg
local5.*                                                        ?templmesg
local6.*                                                        ?templmesg


template(name="json-template"
  type="list") {
   constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"@version\":\"1")
      constant(value="\",\"message\":\"")     property(name="msg" format="json")
      constant(value="\",\"sysloghost\":\"")  property(name="hostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"programname\":\"") property(name="programname")
      constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}


# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
#
#$createDirs on

*.info;mail.none;authpriv.none;cron.none;local0.none;local1.none;local2.none;local3.none;local4.none;local5.none;local6.none              ?templmesg

# The authpriv file has restricted access.
authpriv.*                                              ?templmesg

# Log all the mail messages in one place.
mail.*                                                  ?templmesg


# Log cron stuff
cron.*                                                  ?templmesg

# Everybody gets emergency messages
#*.emerg                                                 :omusrmsg:*

# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler

# Save boot messages also to boot.log
local7.*                                                ?templmesg


# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
#$ActionQueueFileName fwdRule1 # unique name prefix for spool files
#$ActionQueueMaxDiskSpace 1g   # 1gb space limit (use as much as possible)
#$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
#$ActionQueueType LinkedList   # run asynchronously
#$ActionResumeRetryCount -1    # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.* @10.100.10.30:10514;json-template
# ### end of the forwarding rule ###
Jeff Schaller
viggy9816
  • Welcome to the U&L SE. How are you sending the data from the log file to Elasticsearch? Do you have Filebeat configured? – Haxiel Jan 16 '19 at 13:49
  • No Filebeat configured; it is just using UDP ports that are configured to listen for and forward remote logs. If you require it, I can give you the configuration of rsyslog, Elasticsearch, Logstash and Kibana – viggy9816 Jan 17 '19 at 06:20
  • Okay, are the forwarding rules written before the configuration snippet you have shown above? – Haxiel Jan 17 '19 at 06:31
  • yes, the forwarding rules are written before the configuration! – viggy9816 Jan 17 '19 at 06:44
  • Can you verify the steps you have completed against [this blog post](https://www.elastic.co/blog/how-to-centralize-logs-with-rsyslog-logstash-and-elasticsearch-on-ubuntu-14-04)? It's written for Ubuntu, but the flow of messages should work the same way. – Haxiel Jan 17 '19 at 06:52
  • 1.Set up a single, client (or forwarding) rsyslog server - done 2.Set up a single, server (or collecting) rsyslog server, to receive logs from the rsyslog client - done 3.Set up a Logstash instance to receive the messages from the rsyslog collecting server - done 4.Set up an Elasticsearch server to receive the data from Logstash - done – viggy9816 Jan 17 '19 at 07:08
  • Great. Can you provide the rsyslog.conf file from the collecting rsyslog server? Please edit the question and add it there. – Haxiel Jan 17 '19 at 07:19
  • I have provided it – viggy9816 Jan 17 '19 at 13:56
  • Got it. For testing, can you remove the '& stop' line and then check if it works? I believe the message is dropped at that point of processing, which means it never gets to the forwarding rule. This might clutter up your other log files for a short while, but try it out if you can. – Haxiel Jan 17 '19 at 17:42
  • Yes, correct! But if you actually go through the link which I have mentioned in my question about the objective: if I remove the '& stop', my actual objective will not be achieved, and the logs received from different servers will be stored in the same folder DLF/; also, with 2 folders, all the logs received would be duplicated in the two folders. That is why I have two folders, DLF/ and Logserver/, where the logs which come from the "10.100.10.x" network go to the folder DLF/ and the logs which come from the "10.100.112.x" network go to the folder Logserver/. – viggy9816 Jan 18 '19 at 10:10
  • So as per the conf file, I can receive the logs in a segregated manner according to the template. But after that, only the logs from DLF/ are forwarded to ELK, and not those from Logserver/. I require a solution for this. Hope you understood the scenario! Thanks! – viggy9816 Jan 18 '19 at 10:12

1 Answer


Since the rsyslog.conf configuration file is parsed from top to bottom, the actions are carried out in sequence for each message in the same order as they are defined in the file. What happens in your case is that the messages matching the $fromhost startswith "10.100.112" test are processed (i.e. written to the log files specified by the 'mylogsec' template) and then discarded by the stop statement.
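To make the ordering concrete, here is a minimal sketch of the problematic sequence from your configuration, with comments showing where the message is lost:

```
# Messages from 10.100.112.x are written to the per-host dynafile...
if $fromhost startswith "10.100.112" then ?mylogsec
# ...and then discarded; no later rule ever sees them:
& stop

# This forwarding action is therefore never reached for 10.100.112.x hosts
*.* @10.100.10.30:10514;json-template
```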

The solution to this problem is straightforward. Before the message is dropped by rsyslog, you have to forward it to the remote Logstash server. You can modify your filter as shown below:

if $fromhost startswith "10.100.112" then ?mylogsec 
& @10.100.10.30:10514;json-template
& stop

Because you're using the JSON template for forwarding here, you will also need to move the definition of that template before the filter expression. So the final structure will be the following:

$template templmesg...
$template mylogsec...

template(name="json-template"...

if $fromhost startswith "10.100.112" then ?mylogsec 
& @10.100.10.30:10514;json-template
& stop
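For what it's worth, the same logic can also be expressed with rsyslog's newer RainerScript action syntax (a sketch only; it assumes the mylogsec and json-template definitions shown above and a reasonably recent rsyslog, v7 or later):

```
if $fromhost startswith "10.100.112" then {
    # write to the per-host dynafile defined by the mylogsec template
    action(type="omfile" dynaFile="mylogsec")
    # forward to Logstash over UDP using the JSON template
    action(type="omfwd" target="10.100.10.30" port="10514"
           protocol="udp" template="json-template")
    stop
}
```

The legacy `& ...` chaining shown above works just as well; this form simply groups the actions in one block, which some find easier to maintain.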

Once done, restart your rsyslog daemon for the changes to take effect.

Haxiel
  • That's a bullseye! Spot on! Thanks @Haxiel. I have reconfigured accordingly and it is working perfectly. Very helpful indeed. Feeling relieved. – viggy9816 Jan 22 '19 at 07:50
  • @vignesh9816 That's great :-). If my answer has completely resolved your problem, please do mark it as 'accepted' by clicking on the tick mark next to it. This grants reputation to both of us. – Haxiel Jan 22 '19 at 08:00