splunk-connect-for-syslog
Custom Sources/Unique Listening Ports don't apply vendor_product_by_source metadata.
Was the issue replicated by support? No
What is the sc4s version? sc4s version=3.22.3
Is there a pcap available? No
Is the issue related to the customer's environment or is it a software-related issue? No
Is it related to data loss? If so, please explain (protocol? hardware specs?). No
Last chance index/Fallback index? No
Is the issue related to local customization? Yes
Do we have all the default indexes created? Yes
Describe the bug
I have two unique listening ports defined in env_file:
SC4S_LISTEN_CISCO_ASA_515_TCP_PORT=515
SC4S_LISTEN_CISCO_ASA_TCP_PORT=516
I would like to use the SC4S receive time instead of the timestamp in the event, so as a test I set:
# vendor_product_by_source.conf
filter f_telus_syslog {
    host(".*")
};
and
# vendor_product_by_source.csv
f_telus_syslog,sc4s_use_recv_time,"yes"
With the default sources, parser(vendor_product_by_source) is called, so the sc4s_use_recv_time field is set and the timestamp sent to Splunk Cloud is correct.
With the configuration above, I can't find anywhere in the config/code/log path where it is called, so the timestamp never gets replaced.
If I add a custom parser and call it, it works:
# more /opt/sc4s/local/config/app_parsers/syslog/app-telus_asa.conf
block parser telus_asa-parser() {
    channel {
        parser(vendor_product_by_source);
        rewrite {
            r_set_splunk_dest_default(
                source("cisco:asa")
                sourcetype('cisco:asa')
                vendor("cisco")
                product("asa")
                template("t_msg_only")
            );
            set("$(lowercase ${PROGRAM})", value('HOST') condition("${PROGRAM}" ne ""));
        };
    };
};

application telus_asa[sc4s-network-source] {
    filter {
        tags(".source.s_CISCO_ASA_515")
    };
    parser { telus_asa-parser(); };
};
If I call it here, it also works (though this is likely not the best place to do it):
# more /etc/syslog-ng/conf.d/conflib/_splunk/netsourcefields.conf
block parser p_set_netsource_fields(
    vendor()
    product()
) {
    channel {
        # parser(vendor_product_by_source);
        rewrite {
            set("`vendor`", value(".netsource.sc4s_vendor") condition('`vendor`' ne ""));
            set("`product`", value(".netsource.sc4s_product") condition('`product`' ne ""));
            set("`vendor`_`product`", value(".netsource.sc4s_vendor_product"));
            set-tag("vps");
            set-tag("ns_vendor:`vendor`");
            set-tag("ns_product:`product`");
        };
    };
};
Any guidance would be appreciated!
Happy to submit a pull request if I can determine where the appropriate place to call this would be.
OK, I think the issue is with the /etc/syslog-ng/conf.d/sources/source_syslog Jinja2 template:
{%- if not vendor or not product %}
{%- if use_vpscache == True %}
    if {
        parser(p_vpst_cache);
    };
{%- endif %}
    if {
        parser(vendor_product_by_source);
    };
**{%- endif %}**
The conditional checks for a missing vendor or product, which I'm guessing is meant to trigger a VPS cache check. However, the endif encompasses the parser(vendor_product_by_source); line as well, which has the effect of bypassing this code path altogether.
My guess is that the endif should be moved up, right after the previous endif:
{%- if not vendor or not product %}
{%- if use_vpscache == True %}
    if {
        parser(p_vpst_cache);
    };
{%- endif %}
**{%- endif %}**
if {
    parser(vendor_product_by_source);
};
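With that layout, the rendered source would always contain the metadata lookup, regardless of whether vendor and product were already set for the dedicated port (which is my assumption about why the block is currently skipped), i.e. every generated source would end up with something like:

if {
    parser(vendor_product_by_source);
};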
Pull request submitted: https://github.com/splunk/splunk-connect-for-syslog/pull/2457
A sample of how this can be implemented with the modern method:
block parser app-dest-new-cef() {
    channel {
        parser {
            add-contextual-data(
                selector(filters("`syslog-ng-sysconfdir`/conf.d/local/context/vendor_product_by_source.conf")),
                database("`syslog-ng-sysconfdir`/conf.d/local/context/vendor_product_by_source.csv")
                ignore-case(yes)
                prefix(".netsource.")
            );
        };
    };
};
application app-dest-new-cef[sc4s-postfilter] {
    filter {
        tags(".source.s_INFOBLOX_NIOS_THREAT");
    };
    parser {
        app-dest-new-cef();
    };
};
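For the scenario in this issue, the same pattern could presumably be pointed at the dedicated-port ASA source instead of the Infoblox one. A sketch: the telus_vps_lookup names are placeholders, the tag is the one used earlier in this issue, and the add-contextual-data() options are copied unchanged from the sample above:

block parser telus_vps_lookup() {
    channel {
        parser {
            # Re-apply the local vendor_product_by_source context files to this source
            add-contextual-data(
                selector(filters("`syslog-ng-sysconfdir`/conf.d/local/context/vendor_product_by_source.conf")),
                database("`syslog-ng-sysconfdir`/conf.d/local/context/vendor_product_by_source.csv")
                ignore-case(yes)
                prefix(".netsource.")
            );
        };
    };
};

application telus_vps_lookup[sc4s-postfilter] {
    filter {
        tags(".source.s_CISCO_ASA_515");
    };
    parser {
        telus_vps_lookup();
    };
};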