Bro IDS Logstash filter not working
I've set up an ELK stack on CentOS 7 and am forwarding logs from a FreeBSD 11 host that runs Bro. The filters are not parsing the Bro logs correctly.
This is my current setup:
FreeBSD Filebeat client
filebeat.yml
filebeat:
  registry_file: /var/run/.filebeat
  prospectors:
    - paths:
        - /var/log/messages
        - /var/log/maillog
        - /var/log/auth.log
        - /var/log/cron
        - /var/log/debug.log
        - /var/log/devd.log
        - /var/log/ppp.log
        - /var/log/netatalk.log
        - /var/log/setuid.today
        - /var/log/utx.log
        - /var/log/rkhunter.log
        - /var/log/userlog
        - /var/log/sendmail.st
        - /var/log/xferlog
      input_type: log
      document_type: syslog
    - paths:
        - /var/log/bro/current/app_stats.log
      input_type: log
      document_type: bro_app_stats
    - paths:
        - /var/log/bro/current/communication.log
      input_type: log
      document_type: bro_communication
    - paths:
        - /var/log/bro/current/conn.log
      input_type: log
      document_type: bro_conn
    - paths:
        - /var/log/bro/current/dhcp.log
      input_type: log
      document_type: bro_dhcp
    - paths:
        - /var/log/bro/current/dns.log
      input_type: log
      document_type: bro_dns
    - paths:
        - /var/log/bro/current/dpd.log
      input_type: log
      document_type: bro_dpd
    - paths:
        - /var/log/bro/current/files.log
      input_type: log
      document_type: bro_files
    - paths:
        - /var/log/bro/current/ftp.log
      input_type: log
      document_type: bro_ftp
    - paths:
        - /var/log/bro/current/http.log
      input_type: log
      document_type: bro_http
    - paths:
        - /var/log/bro/current/known_certs.log
      input_type: log
      document_type: bro_app_known_certs
    - paths:
        - /var/log/bro/current/known_hosts.log
      input_type: log
      document_type: bro_known_hosts
    - paths:
        - /var/log/bro/current/known_services.log
      input_type: log
      document_type: bro_known_services
    - paths:
        - /var/log/bro/current/notice.log
      input_type: log
      document_type: bro_notice
    - paths:
        - /var/log/bro/current/smtp.log
      input_type: log
      document_type: bro_smtp
    - paths:
        - /var/log/bro/current/software.log
      input_type: log
      document_type: bro_software
    - paths:
        - /var/log/bro/current/ssh.log
      input_type: log
      document_type: bro_ssh
    - paths:
        - /var/log/bro/current/ssl.log
      input_type: log
      document_type: bro_ssl
    - paths:
        - /var/log/bro/current/weird.log
      input_type: log
      document_type: bro_weird
    - paths:
        - /var/log/bro/current/x509.log
      input_type: log
      document_type: bro_x509
Then on the CentOS ELK server I have four configs:
/etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
/etc/logstash/conf.d/10-syslog-filter.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
/etc/logstash/conf.d/20-bro-ids-filter.conf
filter {
  # bro_app_stats ######################
  if [type] == "bro_app" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<ts_delta>(.*?))\t(?<app>(.*?))\t(?<uniq_hosts>(.*?))\t(?<hits>(.*?))\t(?<bytes>(.*))" ]
    }
  }

  # bro_conn ######################
  if [type] == "bro_conn" {
    grok {
      match => [
        "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<proto>(.*?))\t(?<service>(.*?))\t(?<duration>(.*?))\t(?<orig_bytes>(.*?))\t(?<resp_bytes>(.*?))\t(?<conn_state>(.*?))\t(?<local_orig>(.*?))\t(?<missed_bytes>(.*?))\t(?<history>(.*?))\t(?<orig_pkts>(.*?))\t(?<orig_ip_bytes>(.*?))\t(?<resp_pkts>(.*?))\t(?<resp_ip_bytes>(.*?))\t(?<tunnel_parents>(.*?))\t(?<orig_cc>(.*?))\t(?<resp_cc>(.*?))\t(?<sensorname>(.*))",
        "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<proto>(.*?))\t(?<service>(.*?))\t(?<duration>(.*?))\t(?<orig_bytes>(.*?))\t(?<resp_bytes>(.*?))\t(?<conn_state>(.*?))\t(?<local_orig>(.*?))\t(?<missed_bytes>(.*?))\t(?<history>(.*?))\t(?<orig_pkts>(.*?))\t(?<orig_ip_bytes>(.*?))\t(?<resp_pkts>(.*?))\t(?<resp_ip_bytes>(.*?))\t(%{NOTSPACE:tunnel_parents})"
      ]
    }
  }

  # bro_notice ######################
  if [type] == "bro_notice" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<fuid>(.*?))\t(?<file_mime_type>(.*?))\t(?<file_desc>(.*?))\t(?<proto>(.*?))\t(?<note>(.*?))\t(?<msg>(.*?))\t(?<sub>(.*?))\t(?<src>(.*?))\t(?<dst>(.*?))\t(?<p>(.*?))\t(?<n>(.*?))\t(?<peer_descr>(.*?))\t(?<actions>(.*?))\t(?<suppress_for>(.*?))\t(?<dropped>(.*?))\t(?<remote_location.country_code>(.*?))\t(?<remote_location.region>(.*?))\t(?<remote_location.city>(.*?))\t(?<remote_location.latitude>(.*?))\t(?<remote_location.longitude>(.*))" ]
    }
  }

  # bro_dhcp ######################
  if [type] == "bro_dhcp" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<mac>(.*?))\t(?<assigned_ip>(.*?))\t(?<lease_time>(.*?))\t(?<trans_id>(.*))" ]
    }
  }

  # bro_dns ######################
  if [type] == "bro_dns" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<proto>(.*?))\t(?<trans_id>(.*?))\t(?<query>(.*?))\t(?<qclass>(.*?))\t(?<qclass_name>(.*?))\t(?<qtype>(.*?))\t(?<qtype_name>(.*?))\t(?<rcode>(.*?))\t(?<rcode_name>(.*?))\t(?<aa>(.*?))\t(?<tc>(.*?))\t(?<rd>(.*?))\t(?<ra>(.*?))\t(?<z>(.*?))\t(?<answers>(.*?))\t(?<ttls>(.*?))\t(?<rejected>(.*))" ]
    }
  }

  # bro_software ######################
  if [type] == "bro_software" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<bro_host>(.*?))\t(?<host_p>(.*?))\t(?<software_type>(.*?))\t(?<name>(.*?))\t(?<version.major>(.*?))\t(?<version.minor>(.*?))\t(?<version.minor2>(.*?))\t(?<version.minor3>(.*?))\t(?<version.addl>(.*?))\t(?<unparsed_version>(.*))" ]
    }
  }

  # bro_dpd ######################
  if [type] == "bro_dpd" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<proto>(.*?))\t(?<analyzer>(.*?))\t(?<failure_reason>(.*))" ]
    }
  }

  # bro_files ######################
  if [type] == "bro_files" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<fuid>(.*?))\t(?<tx_hosts>(.*?))\t(?<rx_hosts>(.*?))\t(?<conn_uids>(.*?))\t(?<source>(.*?))\t(?<depth>(.*?))\t(?<analyzers>(.*?))\t(?<mime_type>(.*?))\t(?<filename>(.*?))\t(?<duration>(.*?))\t(?<local_orig>(.*?))\t(?<is_orig>(.*?))\t(?<seen_bytes>(.*?))\t(?<total_bytes>(.*?))\t(?<missing_bytes>(.*?))\t(?<overflow_bytes>(.*?))\t(?<timedout>(.*?))\t(?<parent_fuid>(.*?))\t(?<md5>(.*?))\t(?<sha1>(.*?))\t(?<sha256>(.*?))\t(?<extracted>(.*))" ]
    }
  }

  # bro_http ######################
  if [type] == "bro_http" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<trans_depth>(.*?))\t(?<method>(.*?))\t(?<bro_host>(.*?))\t(?<uri>(.*?))\t(?<referrer>(.*?))\t(?<user_agent>(.*?))\t(?<request_body_len>(.*?))\t(?<response_body_len>(.*?))\t(?<status_code>(.*?))\t(?<status_msg>(.*?))\t(?<info_code>(.*?))\t(?<info_msg>(.*?))\t(?<filename>(.*?))\t(?<http_tags>(.*?))\t(?<username>(.*?))\t(?<password>(.*?))\t(?<proxied>(.*?))\t(?<orig_fuids>(.*?))\t(?<orig_mime_types>(.*?))\t(?<resp_fuids>(.*?))\t(?<resp_mime_types>(.*))" ]
    }
  }

  # bro_known_certs ######################
  if [type] == "bro_known_certs" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<bro_host>(.*?))\t(?<port_num>(.*?))\t(?<subject>(.*?))\t(?<issuer_subject>(.*?))\t(?<serial>(.*))" ]
    }
  }

  # bro_known_hosts ######################
  if [type] == "bro_known_hosts" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<bro_host>(.*))" ]
    }
  }

  # bro_known_services ######################
  if [type] == "bro_known_services" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<bro_host>(.*?))\t(?<port_num>(.*?))\t(?<port_proto>(.*?))\t(?<service>(.*))" ]
    }
  }

  # bro_ssh ######################
  if [type] == "bro_ssh" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<status>(.*?))\t(?<direction>(.*?))\t(?<client>(.*?))\t(?<server>(.*?))\t(?<remote_location.country_code>(.*?))\t(?<remote_location.region>(.*?))\t(?<remote_location.city>(.*?))\t(?<remote_location.latitude>(.*?))\t(?<remote_location.longitude>(.*))" ]
    }
  }

  # bro_ssl ######################
  if [type] == "bro_ssl" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<version>(.*?))\t(?<cipher>(.*?))\t(?<server_name>(.*?))\t(?<session_id>(.*?))\t(?<subject>(.*?))\t(?<issuer_subject>(.*?))\t(?<not_valid_before>(.*?))\t(?<not_valid_after>(.*?))\t(?<last_alert>(.*?))\t(?<client_subject>(.*?))\t(?<client_issuer_subject>(.*?))\t(?<cert_hash>(.*?))\t(?<validation_status>(.*))" ]
    }
  }

  # bro_weird ######################
  if [type] == "bro_weird" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t(?<uid>(.*?))\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t(?<name>(.*?))\t(?<addl>(.*?))\t(?<notice>(.*?))\t(?<peer>(.*))" ]
    }
  }

  # bro_x509 #######################
  if [type] == "bro_x509" {
    csv {
      # x509.log: #fields ts id certificate.version certificate.serial certificate.subject certificate.issuer certificate.not_valid_before certificate.not_valid_after certificate.key_alg certificate.sig_alg certificate.key_type certificate.key_length certificate.exponent certificate.curve san.dns san.uri san.email san.ip basic_constraints.ca basic_constraints.path_len
      columns => ["ts","id","certificate.version","certificate.serial","certificate.subject","icertificate.issuer","certificate.not_valid_before","certificate.not_valid_after","certificate.key_alg","certificate.sig_alg","certificate.key_type","certificate.key_length","certificate.exponent","certificate.curve","san.dns","san.uri","san.email","san.ip","basic_constraints.ca","basic_constraints.path_len"]
      # If you use a custom delimiter, change the following value between the quotes to your delimiter. Otherwise, leave the next line alone.
      separator => "	"
    }
    # Let's convert our timestamp into the 'ts' field, so we can use Kibana's features natively.
    date {
      match => [ "ts", "unix" ]
    }
  }

  if [type] == "bro_intel" {
    grok {
      match => [ "message", "(?<ts>(.*?))\t%{DATA:uid}\t(?<id.orig_h>(.*?))\t(?<id.orig_p>(.*?))\t(?<id.resp_h>(.*?))\t(?<id.resp_p>(.*?))\t%{DATA:fuid}\t%{DATA:file_mime_type}\t%{DATA:file_desc}\t(?<seen.indicator>(.*?))\t(?<seen.indicator_type>(.*?))\t(?<seen.where>(.*?))\t%{NOTSPACE:sources}" ]
    }
  }
}
date {
  match => [ "ts", "unix" ]
}
}

filter {
  if "bro" in [type] {
    if [id.orig_h] {
      mutate {
        add_field => [ "senderbase_lookup", "http://www.senderbase.org/lookup/?search_string=%{id.orig_h}" ]
        add_field => [ "cbl_lookup", "http://cbl.abuseat.org/lookup.cgi?ip=%{id.orig_h}" ]
        add_field => [ "spamhaus_lookup", "http://www.spamhaus.org/query/bl?ip=%{id.orig_h}" ]
      }
    }
    mutate {
      add_tag => [ "bro" ]
    }
    mutate {
      convert => [ "id.orig_p", "integer" ]
      convert => [ "id.resp_p", "integer" ]
      convert => [ "orig_bytes", "integer" ]
      convert => [ "resp_bytes", "integer" ]
      convert => [ "missed_bytes", "integer" ]
      convert => [ "orig_pkts", "integer" ]
      convert => [ "orig_ip_bytes", "integer" ]
      convert => [ "resp_pkts", "integer" ]
      convert => [ "resp_ip_bytes", "integer" ]
    }
  }
}

filter {
  if [type] == "bro_conn" {
    # The following uses the translate filter (logstash contrib) to convert conn_state
    # into human-readable text; it saves having to look the values up via packet introspection.
    translate {
      field => "conn_state"
      destination => "conn_state_full"
      dictionary => [
        "S0", "Connection attempt seen, no reply",
        "S1", "Connection established, not terminated",
        "S2", "Connection established, close attempt by originator seen (but no reply from responder)",
        "S3", "Connection established, close attempt by responder seen (but no reply from originator)",
        "SF", "Normal SYN/FIN completion",
        "REJ", "Connection attempt rejected",
        "RSTO", "Connection established, originator aborted (sent a RST)",
        "RSTR", "Established, responder aborted",
        "RSTOS0", "Originator sent a SYN followed by a RST, never saw a SYN-ACK from the responder",
        "RSTRH", "Responder sent a SYN ACK followed by a RST, never saw a SYN from the (purported) originator",
        "SH", "Originator sent a SYN followed by a FIN, never saw a SYN ACK from the responder (hence the connection was 'half' open)",
        "SHR", "Responder sent a SYN ACK followed by a FIN, never saw a SYN from the originator",
        "OTH", "No SYN seen, just midstream traffic (a 'partial connection' that was not later closed)"
      ]
    }
  }
}

# Resolve @source_host to an FQDN, if possible, for those types missing it, using source_host_ip from above
filter {
  if [id.orig_h] {
    if ![id.orig_h-resolved] {
      mutate {
        add_field => [ "id.orig_h-resolved", "%{id.orig_h}" ]
      }
      dns {
        reverse => [ "id.orig_h-resolved" ]
        action => "replace"
      }
    }
  }
}
filter {
  if [id.resp_h] {
    if ![id.resp_h-resolved] {
      mutate {
        add_field => [ "id.resp_h-resolved", "%{id.resp_h}" ]
      }
      dns {
        reverse => [ "id.resp_h-resolved" ]
        action => "replace"
      }
    }
  }
}
and /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
I've leveraged this gist and tailored the configuration. While running, I get the following error in /var/log/logstash/logstash-plain.log:
[2016-11-06T15:30:36,961][ERROR][logstash.agent ] ... (the error message echoes the entire contents of 20-bro-ids-filter.conf with escaped tabs and newlines; it is identical to the file shown above, so it is elided here) ..., :reason=>"Expected one of #, input, filter, output at line 158, column 3 (byte 8746) after "}
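One way to cross-check a "line X, column Y" parse error like this is to track brace depth through the config file: the line where the depth first falls back to zero is where Logstash thinks the top-level filter {} block ends, and anything after it must be a new input/filter/output section. A minimal sketch (the function name and the sample snippet are illustrative, not from the question; it ignores the corner case of unmatched braces inside quoted strings, though %{...} field references are internally balanced and so don't skew the count):

```python
# Locate the line where the top-level brace depth of a Logstash config
# first returns to zero, i.e. where the enclosing block actually closes.

def first_toplevel_close(config_text):
    """Return the 1-based line number where brace depth drops back to 0."""
    depth = 0
    started = False
    for lineno, line in enumerate(config_text.splitlines(), start=1):
        for ch in line:
            if ch == '{':
                depth += 1
                started = True
            elif ch == '}':
                depth -= 1
        if started and depth == 0:
            return lineno
    return None  # block never closes

# A miniature reproduction of the bug: date {} sits after the brace that
# closes filter {}, so Logstash sees it at the top level and complains.
broken = """filter {
  if [type] == "bro_conn" {
    grok { match => [ "message", "%{GREEDYDATA:raw}" ] }
  }
}
  date {
    match => [ "ts", "unix" ]
  }
}
"""

print(first_toplevel_close(broken))  # → 5: filter {} closes before date {}
```

Running this against the real config file would show the filter {} block closing well before its last stanza, matching the byte offset in the error.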
To the best of my ability I've reviewed the Logstash configuration and can't see any errors. Can anyone help me figure out what's wrong with it?
I'm running:

logstash.noarch        1:5.0.0-1   @elasticsearch
elasticsearch.noarch   5.0.0-1     @elasticsearch
Many thanks.
If you match the open curly brace at the top of 20-bro-ids-filter.conf, you'll see that it matches a close curly brace before the date {} stanza. That puts the date {} outside of the filter {}, generating the message that Logstash is expecting input {}, output {}, or filter {}.
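Concretely, that means removing the stray close brace (or moving date {} up) so the tail of 20-bro-ids-filter.conf ends up shaped like this (a structural sketch only, grok patterns elided):

```text
filter {
  # ... all the if [type] == "bro_*" blocks ...

  if [type] == "bro_intel" {
    grok {
      match => [ "message", "..." ]
    }
  }

  # now inside filter {}, not after it
  date {
    match => [ "ts", "unix" ]
  }
}
```

After editing, the configuration can be checked without starting the pipeline by running Logstash with the --config.test_and_exit (-t) flag, which reports the first syntax error with a line/column position.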