logstash-filter-csv
"Attempting to install template"
I have a large CSV file that was previously ingested fine with ES v2. Logstash v5.2 with this csv filter appears to ignore both the CSV file and the conf file on startup: the log stops after "Attempting to install template" and nothing is indexed.
Even the simplest test, following this blog article https://qbox.io/blog/import-csv-elasticsearch-logstash-sincedb, fails with the csv plugin on v5.2.1. Is this an issue with the csv filter? Here is the console output when using the csv filter:

```
$ bin/logstash -f ~/logstash-5.2.1/bitcoin-data/btc.conf --verbose
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
Sending Logstash's logs to /Users/greg/logstash-5.2.1/logs which is now configured via log4j2.properties
[2017-03-06T12:44:16,305][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2017-03-06T12:44:16,309][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2017-03-06T12:44:16,439][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#<URI::HTTP:0x2d420e7e URL:http://localhost:9200/>}
[2017-03-06T12:44:16,442][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-03-06T12:44:16,502][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-03-06T12:44:16,510][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::HTTP:0x2ee0e36f URL:http://localhost:9200>]}
[2017-03-06T12:44:16,517][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-03-06T12:44:16,521][INFO ][logstash.pipeline        ] Pipeline main started
[2017-03-06T12:44:16,625][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
```
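For context, "Attempting to install template" is ordinary INFO output from the elasticsearch output plugin, not an error. When a pipeline starts cleanly like this but indexes nothing, a common cause is the file input's sincedb, which remembers how far each file was already read, so a file ingested once is skipped on later runs. A minimal pipeline in the style of the blog post might look like the sketch below; the paths, column names, and index name are placeholders, not taken from the original btc.conf:

```
input {
  file {
    path => "/path/to/btc.csv"       # placeholder path to the CSV file
    start_position => "beginning"    # read from the top of the file, not just new lines
    sincedb_path => "/dev/null"      # forget the read position, so every run re-reads the file
  }
}
filter {
  csv {
    separator => ","
    columns => ["date", "price"]     # placeholder column names
  }
  mutate {
    convert => { "price" => "float" }  # example type conversion for a numeric column
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "btc-prices"            # placeholder index name
  }
}
```

With `sincedb_path => "/dev/null"` and `start_position => "beginning"`, the file input re-reads the whole file on every start, which is usually what you want when testing a one-off CSV import.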
Please post all product and debugging questions on our forum. Your questions will reach our wider community members there, and if we confirm that there is a bug, then we can open a new issue here.
For all general issues, please provide the following details for fast resolution:
- Version:
- Operating System:
- Config File (if you have sensitive info, please remove it):
- Sample Data:
- Steps to Reproduce:
You'll have a better chance of getting help by posting on http://discuss.elastic.co. What you're seeing is probably not a bug in Logstash.