Prerequisite: a Redis cluster is already set up and configured with a single shared access password.

The architecture is filebeat --> Redis cluster --> logstash --> elasticsearch, so the Filebeat output and the Logstash input need to be adjusted accordingly.

Filebeat host: 192.168.80.108

Redis cluster host: 192.168.80.107, deployed as a pseudo-cluster (all nodes on one machine, ports 7001-7008).
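For reference, a pseudo-cluster like this is typically created with the redis-trib.rb script that ships with Redis 4.0.1. This is only a sketch; the original post does not show the creation step, and the 4-master / 4-replica layout is an assumption:

# Minimal sketch: 8 nodes on one host, --replicas 1 gives 4 masters and 4 replicas.
# Redis 5+ replaces redis-trib.rb with "redis-cli --cluster create".
/elk/redis/redis-4.0.1/src/redis-trib.rb create --replicas 1 \
  192.168.80.107:7001 192.168.80.107:7002 192.168.80.107:7003 192.168.80.107:7004 \
  192.168.80.107:7005 192.168.80.107:7006 192.168.80.107:7007 192.168.80.107:7008

Note that redis-trib.rb has no password option, so requirepass/masterauth are usually applied in each node's config after the cluster is formed.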

1 Filebeat configuration

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/openresty/nginx/logs/host.access.log
  fields:
    log_source: messages
- type: log
  enabled: true
  paths:
    - /usr/local/openresty/nginx/logs/error.log
  fields:
    log_source: secure

output.redis:
  # List of Redis cluster node addresses
  hosts: ["192.168.80.107:7001","192.168.80.107:7002","192.168.80.107:7003","192.168.80.107:7004","192.168.80.107:7005","192.168.80.107:7006","192.168.80.107:7007","192.168.80.107:7008"]
  # Redis key to write to
  key: messages_secure
  password: foobar2000
  # In cluster mode only database 0 is usable; any other value causes an error
  db: 0
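Before restarting Filebeat it is worth syntax-checking the file. A minimal sketch, assuming Filebeat was installed from a package (so the default config path /etc/filebeat/filebeat.yml and the systemd unit apply):

# Syntax-check the Filebeat configuration (uses /etc/filebeat/filebeat.yml for a package install)
filebeat test config
# Restart the service so the new Redis output takes effect
systemctl restart filebeat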

2 Checking the data on the Redis side

Log in:

# -h is the host, -p is the port, -c enables cluster mode, -a is the password
/elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -c -p 7001 -a foobar2000

Check:

192.168.80.107:7001> keys *    # this key showing up means the Filebeat data has reached the Redis cluster
1) "messages_secure"
192.168.80.107:7001> llen messages_secure    ## check the list length
(integer) 2002
192.168.80.107:7001> lindex messages_secure 0    # inspect an entry

Alternatively, use a Redis GUI client such as RedisDesktopManager.

One issue observed: two messages_secure keys show up in the Redis cluster, holding exactly the same data. This still needs further investigation.
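One way to narrow this down is to query every node individually and see where the key physically lives; in an 8-node pseudo-cluster created with one replica per master, a copy on a replica node would look like a "duplicate" key when browsing node by node. A hedged diagnostic sketch, reusing the node list and password from the config above (the replica layout is an assumption):

# Ask each node directly (no -c, so no slot redirection) whether it holds a copy of the key
for port in 7001 7002 7003 7004 7005 7006 7007 7008; do
  echo "--- node $port ---"
  /elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -p $port -a foobar2000 keys messages_secure
done
# "cluster nodes" shows which of those nodes are masters and which are replicas
/elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -p 7001 -a foobar2000 cluster nodes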

3 Logstash configuration

input {
  redis {
    host => "192.168.80.107"
    port => 7001
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7002
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7003
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7004
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7005
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7006
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7007
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    host => "192.168.80.107"
    port => 7008
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
  redis {
    batch_count => 1
    host => "192.168.80.107"
    port => 7001
    password => "foobar2000"
    data_type => "list"
    key => "messages_secure"
    db => 0
  }
}

# Output to Elasticsearch, creating a different index depending on the log source
output {
  if [fields][log_source] == "messages" {
    elasticsearch {
      hosts => ["http://192.168.80.104:9200", "http://192.168.80.105:9200", "http://192.168.80.106:9200"]
      index => "messages-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "elkstack123456"
    }
  }
  if [fields][log_source] == "secure" {
    elasticsearch {
      hosts => ["http://192.168.80.104:9200", "http://192.168.80.105:9200", "http://192.168.80.106:9200"]
      index => "secure-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "elkstack123456"
    }
  }
}
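The pipeline file can be syntax-checked before it is started. A minimal sketch, assuming Logstash lives under /usr/share/logstash and the pipeline above is saved as /etc/logstash/conf.d/redis-to-es.conf (both paths are assumptions):

# --config.test_and_exit parses the config and reports errors without starting the pipeline
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis-to-es.conf --config.test_and_exit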

Notes:

In the Logstash redis input, host is a plain string and cannot take a list, so a separate redis block has to be written for every node in the cluster.

If only one of the Redis cluster node addresses is configured, the following messages appear and Logstash cannot pull any data from the Redis cluster:

Redis connection problem {:exception=>#<Redis::CommandError: CROSSSLOT Keys in request don't hash to the same slot>}
Redis connection problem {:exception=>#<Redis::CommandError: MOVED 7928 192.168.80.107:7002>}

With all of the node addresses configured, the same two messages still appear, but Logstash is able to pull data from the Redis cluster.
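The MOVED message is simply the cluster redirecting a non-cluster-aware client to the node that owns the key's hash slot, so in practice only the redis input pointed at the owning master actually returns data. To see which node that is, the slot can be looked up directly (a sketch reusing the addresses above):

# Which hash slot does the key map to?
/elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -p 7001 -a foobar2000 cluster keyslot messages_secure
# Which node serves that slot? Compare against the slot ranges listed here.
/elk/redis/redis-4.0.1/src/redis-cli -h 192.168.80.107 -p 7001 -a foobar2000 cluster nodes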

4 Open issues

A follow-on problem: because the Redis cluster holds two copies of messages_secure, the data Logstash pulls from the cluster contains two identical copies of every event, so the data shipped to Elasticsearch is duplicated as well; in Kibana every record shows up twice.

The root cause is that the data Filebeat writes into the Redis cluster is duplicated, so this depends on resolving the issue described above.
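Until the duplication on the Filebeat/Redis side is understood, one common workaround (not part of the original setup) is to derive the Elasticsearch document ID from a fingerprint of the event, so the second copy overwrites the first instead of creating a new document. A hedged sketch using the Logstash fingerprint filter; the HMAC key value is arbitrary:

filter {
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA1"
    key => "dedup-key"    # any fixed string works here
  }
}

Then add document_id => "%{[@metadata][fingerprint]}" to each elasticsearch output above, so an identical event hashes to the same ID and becomes an update rather than a duplicate document.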

5 Relevant official documentation

The host option takes a string; a list is not supported.

Redis input plugin

  • Plugin version: v3.1.4
  • Released on: 2017-08-16
  • Changelog

For other versions, see the Versioned plugin docs.

Getting Help

For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue in Github. For the list of Elastic supported plugins, please consult the Elastic Support Matrix.

Description

This input will read events from a Redis instance; it supports both Redis channels and lists. The list command (BLPOP) used by Logstash is supported in Redis v1.3.1+, and the channel commands used by Logstash are found in Redis v1.3.8+. While you may be able to make these Redis versions work, the best performance and stability will be found in more recent stable versions. Versions 2.6.0+ are recommended.

For more information about Redis, see http://redis.io/

batch_count note: If you use the batch_count setting, you must use a Redis version 2.6.0 or newer. Anything older does not support the operations used by batching.

Redis Input Configuration Options

This plugin supports the following configuration options plus the Common Options described later.

Setting      Input type                                             Required
batch_count  number                                                 No
data_type    string, one of ["list", "channel", "pattern_channel"]  Yes
db           number                                                 No
host         string                                                 No
key          string                                                 Yes
password     password                                               No
port         number                                                 No
ssl          boolean                                                No
threads      number                                                 No
timeout      number                                                 No

Also see Common Options for a list of options supported by all input plugins.

batch_count

  • Value type is number
  • Default value is 125

The number of events to return from Redis using EVAL.

data_type

  • This is a required setting.
  • Value can be any of: list, channel, pattern_channel
  • There is no default value for this setting.

Specify either list or channel. If data_type is list, then we will BLPOP the key. If data_type is channel, then we will SUBSCRIBE to the key. If data_type is pattern_channel, then we will PSUBSCRIBE to the key.
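For illustration, a minimal sketch of the two most common variants (the key names here are placeholders, not taken from the article above):

input {
  # list: events are popped with BLPOP, so each event is consumed exactly once
  redis {
    data_type => "list"
    key => "my_list_key"
  }
  # channel: events arrive via SUBSCRIBE, so every subscriber receives a copy
  redis {
    data_type => "channel"
    key => "my_channel"
  }
}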

db

  • Value type is number
  • Default value is 0

The Redis database number.

host

  • Value type is string
  • Default value is "127.0.0.1"

The hostname of your Redis server.

key

  • This is a required setting.
  • Value type is string
  • There is no default value for this setting.

The name of a Redis list or channel.

password

  • Value type is password
  • There is no default value for this setting.

Password to authenticate with. There is no authentication by default.

port

  • Value type is number
  • Default value is 6379

The port to connect on.

ssl

  • Value type is boolean
  • Default value is false

Enable SSL support.

threads

  • Value type is number
  • Default value is 1

timeout

  • Value type is number
  • Default value is 5

Initial connection timeout in seconds.

Common Options

The following configuration options are supported by all input plugins:

Setting        Input type  Required
add_field      hash        No
codec          codec       No
enable_metric  boolean     No
id             string      No
tags           array       No
type           string      No

Details

add_field

  • Value type is hash
  • Default value is {}

Add a field to an event

codec

  • Value type is codec
  • Default value is "plain"

The codec used for input data. Input codecs are a convenient method for decoding your data before it enters the input, without needing a separate filter in your Logstash pipeline.

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, if you have 2 redis inputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

input {
  redis {
    id => "my_plugin_id"
  }
}

tags

  • Value type is array
  • There is no default value for this setting.

Add any number of arbitrary tags to your event.

This can help with processing later.

type

  • Value type is string
  • There is no default value for this setting.

Add a type field to all events handled by this input.

Types are used mainly for filter activation.

The type is stored as part of the event itself, so you can also use the type to search for it in Kibana.

If you try to set a type on an event that already has one (for example when you send an event from a shipper to an indexer) then a new input will not override the existing type. A type set at the shipper stays with that event for its life even when sent to another Logstash server.
