Flume Examples

A few Flume configuration examples.


Example: Streaming a Local File to HDFS in Real Time

Requirement: monitor the Hive log in real time and upload it to HDFS.
Single source, single sink (1)

  1. Add the aliyun-related jars below to Flume's lib directory

    commons-configuration-1.6.jar
    aliyun-auth-2.7.2.jar
    aliyun-common-2.7.2.jar
    aliyun-hdfs-2.7.2.jar
    commons-io-2.4.jar
    htrace-core-3.1.0-incubating.jar
  2. Create the flume-file-hdfs.conf file
    Create the file:

    [emerk@hadoop001 job]$ touch flume-file-hdfs.conf

    Note: to read a file on a Linux system, you follow the rules of Linux commands. Since the Hive log lives on the Linux filesystem, the source type is exec (short for execute), meaning a Linux command is run to read the file.

    [emerk@hadoop001 job]$ vim flume-file-hdfs.conf

    Add the following content. The comments are kept on their own lines, because Flume's properties parser treats anything after a value, including a trailing # comment, as part of the value.

    # Name the components on this agent
    a2.sources = r2
    a2.sinks = k2
    a2.channels = c2

    # Describe/configure the source
    # exec source: run a Linux command to read the file
    a2.sources.r2.type = exec
    a2.sources.r2.command = tail -F /opt/module/hive/logs/hive.log
    # absolute path of the shell used to run the command
    a2.sources.r2.shell = /bin/bash -c

    # Describe the sink
    a2.sinks.k2.type = hdfs
    a2.sinks.k2.hdfs.path = hdfs://hadoop001:9000/flume/%Y%m%d/%H
    # prefix of the files uploaded to HDFS
    a2.sinks.k2.hdfs.filePrefix = logs-
    # roll folders based on time
    a2.sinks.k2.hdfs.round = true
    # create a new folder every 1 time unit
    a2.sinks.k2.hdfs.roundValue = 1
    # the time unit for rolling folders
    a2.sinks.k2.hdfs.roundUnit = hour
    # use the local timestamp
    a2.sinks.k2.hdfs.useLocalTimeStamp = true
    # number of Events to accumulate before flushing to HDFS
    a2.sinks.k2.hdfs.batchSize = 1000
    # file type; compression is supported
    a2.sinks.k2.hdfs.fileType = DataStream
    # roll to a new file every 60 seconds
    a2.sinks.k2.hdfs.rollInterval = 60
    # roll to a new file at roughly this size (about 128 MB)
    a2.sinks.k2.hdfs.rollSize = 134217700
    # rolling is independent of the number of Events
    a2.sinks.k2.hdfs.rollCount = 0

    # Use a channel which buffers events in memory
    a2.channels.c2.type = memory
    a2.channels.c2.capacity = 1000
    a2.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r2.channels = c2
    a2.sinks.k2.channel = c2

    Note:
    For all time-related escape sequences, the Event header must contain a key named "timestamp" (unless hdfs.useLocalTimeStamp is set to true, in which case a TimestampInterceptor is used to add the timestamp automatically).

  3. Start the agent with this configuration

    [emerk@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/flume-file-hdfs.conf
  4. Start aliyun and Hive, and run some Hive operations to generate log entries

    [emerk@hadoop001 aliyun-2.7.2]$ sbin/start-dfs.sh
    [emerk@hadoop001 aliyun-2.7.2]$ sbin/start-yarn.sh

    [emerk@hadoop001 hive]$ bin/hive
    hive (default)>
  5. Check the files on HDFS.
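
    A minimal sketch for the check in step 5, assuming the hdfs client is on the PATH and the hdfs.path configured above; the actual file names (logs-<timestamp>) are generated by Flume and will differ:

    # list the hour folders created by the %Y%m%d/%H escapes
    hdfs dfs -ls /flume/$(date +%Y%m%d)/
    # peek at the rolled files for the current hour
    hdfs dfs -cat /flume/$(date +%Y%m%d)/$(date +%H)/logs-*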

Example: Streaming Directory Files to HDFS in Real Time

Requirement: use Flume to watch every file in a directory.

Single source, single sink (2)

  1. Create the configuration file flume-dir-hdfs.conf

    [emerk@hadoop001 job]$ touch flume-dir-hdfs.conf
  2. Open the file

    [emerk@hadoop001 job]$ vim flume-dir-hdfs.conf
  3. Add the following content (again with comments kept on their own lines)

    # Name the components on this agent
    a3.sources = r3
    a3.sinks = k3
    a3.channels = c3

    # Describe/configure the source
    # spooldir source: watch a directory for new files
    a3.sources.r3.type = spooldir
    # the directory to monitor
    a3.sources.r3.spoolDir = /opt/module/flume/upload
    # suffix appended to a file once it has been fully ingested
    a3.sources.r3.fileSuffix = .COMPLETED
    # add a header with the absolute path of the source file
    a3.sources.r3.fileHeader = true
    # ignore (do not upload) files ending in .tmp
    a3.sources.r3.ignorePattern = ([^ ]*\.tmp)

    # Describe the sink
    a3.sinks.k3.type = hdfs
    # HDFS path the files are uploaded to
    a3.sinks.k3.hdfs.path = hdfs://hadoop001:9000/flume/upload/%Y%m%d/%H

    # prefix of the files uploaded to HDFS
    a3.sinks.k3.hdfs.filePrefix = upload-
    # roll folders based on time
    a3.sinks.k3.hdfs.round = true
    # create a new folder every 1 time unit
    a3.sinks.k3.hdfs.roundValue = 1
    # the time unit for rolling folders
    a3.sinks.k3.hdfs.roundUnit = hour
    # use the local timestamp
    a3.sinks.k3.hdfs.useLocalTimeStamp = true
    # number of Events to accumulate before flushing to HDFS
    a3.sinks.k3.hdfs.batchSize = 100
    # file type; compression is supported
    a3.sinks.k3.hdfs.fileType = DataStream
    # roll to a new file every 60 seconds
    a3.sinks.k3.hdfs.rollInterval = 60
    # roll to a new file at roughly this size (about 128 MB)
    a3.sinks.k3.hdfs.rollSize = 134217700
    # rolling is independent of the number of Events
    a3.sinks.k3.hdfs.rollCount = 0

    # Use a channel which buffers events in memory
    a3.channels.c3.type = memory
    a3.channels.c3.capacity = 1000
    a3.channels.c3.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r3.channels = c3
    a3.sinks.k3.channel = c3
  4. Start the agent that monitors the folder

    [emerk@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/flume-dir-hdfs.conf

    Notes on using the Spooling Directory Source:

    1. Do not create files in the monitored directory and keep modifying them afterwards
    2. Files that have been fully ingested are renamed with a .COMPLETED suffix
    3. The monitored folder is scanned for changes every 500 milliseconds
  5. Add files to the upload folder
    Create the upload folder under /opt/module/flume

    [emerk@hadoop001 flume]$ mkdir upload

    Add files to the upload folder

    [emerk@hadoop001 upload]$ touch aliyun.txt
    [emerk@hadoop001 upload]$ touch aliyun.tmp
    [emerk@hadoop001 upload]$ touch aliyun.log
  6. Check the data on HDFS (a hedged check is sketched after this list)

  7. Wait about one second, then list the upload folder again

    [emerk@hadoop001 upload]$ ll
    total 0
    -rw-rw-r--. 1 emerk emerk 0 Aug 20 22:31 aliyun.log.COMPLETED
    -rw-rw-r--. 1 emerk emerk 0 Aug 20 22:31 aliyun.tmp
    -rw-rw-r--. 1 emerk emerk 0 Aug 20 22:31 aliyun.txt.COMPLETED
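
    A minimal sketch for the HDFS check in step 6, assuming the hdfs.path configured above; the file names under the hour folder are generated by Flume and will differ:

    # list the hour folder that the %Y%m%d/%H escapes resolve to
    hdfs dfs -ls /flume/upload/$(date +%Y%m%d)/$(date +%H)
    # aliyun.tmp should not show up anywhere: it matches ignorePattern and stays in the local folder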

Example: Single Source, Multiple Sinks (Selector)

Requirement:
Flume-1 monitors a file for changes and passes the new content to Flume-2, which stores it on HDFS.
At the same time, Flume-1 passes the new content to Flume-3, which writes it to the local file system.

Single source, multiple sinks (1)

  1. Preparation
    Create a group1 folder under /opt/module/flume/job

    [emerk@hadoop001 job]$ mkdir group1
    [emerk@hadoop001 job]$ cd group1/

    Create a flume3 folder under /opt/module/datas/

    [emerk@hadoop001 datas]$ mkdir flume3
  2. Create flume-file-flume.conf
    Configure one source that reads the log file, plus two channels and two sinks that feed flume-flume-hdfs and flume-flume-dir respectively.
    Create the configuration file and open it

    [emerk@hadoop001 group1]$ touch flume-file-flume.conf
    [emerk@hadoop001 group1]$ vim flume-file-flume.conf

    Add the following content

    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1 k2
    a1.channels = c1 c2

    # replicate the data flow to all channels
    a1.sources.r1.selector.type = replicating
    # Describe/configure the source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
    a1.sources.r1.shell = /bin/bash -c

    # Describe the sink
    # the avro sink acts as a data sender
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = hadoop001
    a1.sinks.k1.port = 4141
    a1.sinks.k2.type = avro
    a1.sinks.k2.hostname = hadoop001
    a1.sinks.k2.port = 4142

    # Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    a1.channels.c2.type = memory
    a1.channels.c2.capacity = 1000
    a1.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1 c2
    a1.sinks.k1.channel = c1
    a1.sinks.k2.channel = c2

    Note: Avro is a language-neutral data serialization and RPC framework created by Doug Cutting, the founder of Hadoop.
    Note: RPC (Remote Procedure Call) is a protocol for requesting a service from a program on a remote machine over the network, without needing to understand the underlying network technology.

  3. Create flume-flume-hdfs.conf
    Configure a source that receives the upstream Flume's output and a sink that writes to HDFS.
    Create the configuration file and open it

    [emerk@hadoop001 group1]$ touch flume-flume-hdfs.conf
    [emerk@hadoop001 group1]$ vim flume-flume-hdfs.conf

    Add the following content

    # Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1

    # Describe/configure the source
    # the avro source acts as a data receiving service
    a2.sources.r1.type = avro
    a2.sources.r1.bind = hadoop001
    a2.sources.r1.port = 4141

    # Describe the sink
    a2.sinks.k1.type = hdfs
    a2.sinks.k1.hdfs.path = hdfs://hadoop001:9000/flume2/%Y%m%d/%H
    # prefix of the files uploaded to HDFS
    a2.sinks.k1.hdfs.filePrefix = flume2-
    # roll folders based on time
    a2.sinks.k1.hdfs.round = true
    # create a new folder every 1 time unit
    a2.sinks.k1.hdfs.roundValue = 1
    # the time unit for rolling folders
    a2.sinks.k1.hdfs.roundUnit = hour
    # use the local timestamp
    a2.sinks.k1.hdfs.useLocalTimeStamp = true
    # number of Events to accumulate before flushing to HDFS
    a2.sinks.k1.hdfs.batchSize = 100
    # file type; compression is supported
    a2.sinks.k1.hdfs.fileType = DataStream
    # roll to a new file every 600 seconds
    a2.sinks.k1.hdfs.rollInterval = 600
    # roll to a new file at roughly this size (about 128 MB)
    a2.sinks.k1.hdfs.rollSize = 134217700
    # rolling is independent of the number of Events
    a2.sinks.k1.hdfs.rollCount = 0

    # Describe the channel
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1
  4. Create flume-flume-dir.conf
    Configure a source that receives the upstream Flume's output and a sink that writes to a local directory.
    Create the configuration file and open it

    [emerk@hadoop001 group1]$ touch flume-flume-dir.conf
    [emerk@hadoop001 group1]$ vim flume-flume-dir.conf

    Add the following content

    # Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c2

    # Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = hadoop001
    a3.sources.r1.port = 4142

    # Describe the sink
    a3.sinks.k1.type = file_roll
    a3.sinks.k1.sink.directory = /opt/module/datas/flume3

    # Describe the channel
    a3.channels.c2.type = memory
    a3.channels.c2.capacity = 1000
    a3.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r1.channels = c2
    a3.sinks.k1.channel = c2

    Tip: the local output directory must already exist; if it does not, the file_roll sink will not create it.

  5. Run the configurations
    Start the agents with their configuration files, in this order: flume-flume-dir, flume-flume-hdfs, flume-file-flume.

    [emerk@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group1/flume-flume-dir.conf
    [emerk@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group1/flume-flume-hdfs.conf
    [emerk@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group1/flume-file-flume.conf
  6. Start aliyun and Hive

    [emerk@hadoop001 aliyun-2.7.2]$ sbin/start-dfs.sh
    [emerk@hadoop001 aliyun-2.7.2]$ sbin/start-yarn.sh
    [emerk@hadoop001 hive]$ bin/hive

    hive (default)>
  7. Check the data on HDFS

  8. Check the data in the /opt/module/datas/flume3 directory

    [emerk@hadoop001 flume3]$ ll

    total 8
    -rw-rw-r--. 1 hadoop hadoop 5942 May 22 00:09 1526918887550-3
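
    A quick way to confirm that both branches of the replicating selector received the same data, assuming the paths configured above (file names are generated and will differ). Note that the file_roll sink rolls to a new file every 30 seconds by default, so some of the local files may be empty:

    # the HDFS branch (flume-flume-hdfs) writes under /flume2
    hdfs dfs -ls /flume2/$(date +%Y%m%d)/$(date +%H)
    # the local branch (flume-flume-dir) rolls files into the flume3 directory
    ls -l /opt/module/datas/flume3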

Example: Single Source, Multiple Sinks (Sink Group)

Requirement:
Flume-1 listens on a netcat port and, through a load-balancing sink group, passes the events to Flume-2 and Flume-3.
Flume-2 and Flume-3 each print what they receive to the console.

Single source, multiple sinks (2)

  1. Preparation
    Create a group2 folder under /opt/module/flume/job

    [emerk@hadoop001 job]$ cd group2/
  2. Create flume-netcat-flume.conf
    Configure one source that listens on a netcat port, one channel, and two sinks that feed flume-flume-console1 and flume-flume-console2 respectively.
    Create the configuration file and open it

    [emerk@hadoop001 group2]$ touch flume-netcat-flume.conf
    [emerk@hadoop001 group2]$ vim flume-netcat-flume.conf

    Add the following content

    # Name the components on this agent
    a1.sources = r1
    a1.channels = c1
    a1.sinkgroups = g1
    a1.sinks = k1 k2

    # Describe/configure the source
    a1.sources.r1.type = netcat
    a1.sources.r1.bind = localhost
    a1.sources.r1.port = 44444

    a1.sinkgroups.g1.processor.type = load_balance
    a1.sinkgroups.g1.processor.backoff = true
    a1.sinkgroups.g1.processor.selector = round_robin
    a1.sinkgroups.g1.processor.selector.maxTimeOut=10000

    # Describe the sink
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = hadoop001
    a1.sinks.k1.port = 4141

    a1.sinks.k2.type = avro
    a1.sinks.k2.hostname = hadoop001
    a1.sinks.k2.port = 4142

    # Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinkgroups.g1.sinks = k1 k2
    a1.sinks.k1.channel = c1
    a1.sinks.k2.channel = c1

    Note: Avro is a language-neutral data serialization and RPC framework created by Doug Cutting, the founder of Hadoop.
    Note: RPC (Remote Procedure Call) is a protocol for requesting a service from a program on a remote machine over the network, without needing to understand the underlying network technology.

  3. Create flume-flume-console1.conf
    Configure a source that receives the upstream Flume's output and a sink that prints to the local console.
    Create the configuration file and open it

    [emerk@hadoop001 group2]$ touch flume-flume-console1.conf
    [emerk@hadoop001 group2]$ vim flume-flume-console1.conf

    Add the following content

    # Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1

    # Describe/configure the source
    a2.sources.r1.type = avro
    a2.sources.r1.bind = hadoop001
    a2.sources.r1.port = 4141

    # Describe the sink
    a2.sinks.k1.type = logger

    # Describe the channel
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1
  4. Create flume-flume-console2.conf
    Configure a source that receives the upstream Flume's output and a sink that prints to the local console.
    Create the configuration file and open it

    [emerk@hadoop001 group2]$ touch flume-flume-console2.conf
    [emerk@hadoop001 group2]$ vim flume-flume-console2.conf

    Add the following content

    # Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c2

    # Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = hadoop001
    a3.sources.r1.port = 4142

    # Describe the sink
    a3.sinks.k1.type = logger

    # Describe the channel
    a3.channels.c2.type = memory
    a3.channels.c2.capacity = 1000
    a3.channels.c2.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r1.channels = c2
    a3.sinks.k1.channel = c2
  5. Run the configurations
    Start the agents with their configuration files, in this order: flume-flume-console2, flume-flume-console1, flume-netcat-flume.

    [emerk@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group2/flume-flume-console2.conf -Dflume.root.logger=INFO,console
    [emerk@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group2/flume-flume-console1.conf -Dflume.root.logger=INFO,console
    [emerk@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group2/flume-netcat-flume.conf
  6. Use the netcat tool to send content to port 44444 on the local machine

    $ nc localhost 44444
  7. Watch the console output of Flume-2 and Flume-3
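
    A minimal sketch for driving the load-balancing sink group in steps 6 and 7, assuming the three agents above are running; how the lines split between the two consoles depends on the round_robin selector and its backoff state:

    # send several numbered lines to the netcat source
    for i in $(seq 1 10); do echo "event-$i"; sleep 1; done | nc localhost 44444
    # with processor.selector = round_robin, the lines are distributed across the two avro sinks,
    # so some appear on the a2 (console1) agent and the rest on the a3 (console2) agent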

Example: Aggregating Multiple Data Sources

Requirement:

Flume-1 on hadoop002 monitors the file /opt/module/group.log.

Flume-2 on hadoop001 monitors the data stream on a port.

Flume-1 and Flume-2 send their data to Flume-3 on hadoop003, which prints the merged data to the console.

Multiple sources, single sink

  1. Preparation
    Distribute Flume to the other nodes

    [emerk@hadoop001 module]$ xsync flume

    Create a group3 folder under /opt/module/flume/job on hadoop001, hadoop002, and hadoop003.

    [emerk@hadoop001 job]$ mkdir group3

    [emerk@hadoop002 job]$ mkdir group3

    [emerk@hadoop003 job]$ mkdir group3
  2. Create flume1-logger-flume.conf
    Configure a source that monitors the group.log file and a sink that sends the data to the next Flume agent.
    Create the configuration file on hadoop002 and open it

    [emerk@hadoop002 group3]$ touch flume1-logger-flume.conf

    [emerk@hadoop002 group3]$ vim flume1-logger-flume.conf

    Add the following content

    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1

    # Describe/configure the source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /opt/module/group.log
    a1.sources.r1.shell = /bin/bash -c

    # Describe the sink
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = hadoop003
    a1.sinks.k1.port = 4141

    # Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
  3. Create flume2-netcat-flume.conf
    Configure a source that listens for the data stream on port 44444 and a sink that sends the data to the next Flume agent.
    Create the configuration file on hadoop001 and open it

    [emerk@hadoop001 group3]$ touch flume2-netcat-flume.conf

    [emerk@hadoop001 group3]$ vim flume2-netcat-flume.conf

    Add the following content

    # Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1

    # Describe/configure the source
    a2.sources.r1.type = netcat
    a2.sources.r1.bind = hadoop001
    a2.sources.r1.port = 44444

    # Describe the sink
    a2.sinks.k1.type = avro
    a2.sinks.k1.hostname = hadoop003
    a2.sinks.k1.port = 4141

    # Use a channel which buffers events in memory
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1
  4. Create flume3-flume-logger.conf
    Configure a source that receives the data streams sent by flume1 and flume2 and a sink that prints the merged data to the console.
    Create the configuration file on hadoop003 and open it

    [emerk@hadoop003 group3]$ touch flume3-flume-logger.conf

    [emerk@hadoop003 group3]$ vim flume3-flume-logger.conf

    Add the following content

    # Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c1

    # Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = hadoop003
    a3.sources.r1.port = 4141

    # Describe the sink
    a3.sinks.k1.type = logger

    # Describe the channel
    a3.channels.c1.type = memory
    a3.channels.c1.capacity = 1000
    a3.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a3.sources.r1.channels = c1
    a3.sinks.k1.channel = c1
  5. Run the configurations
    Start the agents with their configuration files, in this order: flume3-flume-logger.conf, flume2-netcat-flume.conf, flume1-logger-flume.conf.

    [hadoop@hadoop003 flume]$ bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group3/flume3-flume-logger.conf -Dflume.root.logger=INFO,console

    [hadoop@hadoop001 flume]$ bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group3/flume2-netcat-flume.conf

    [hadoop@hadoop002 flume]$ bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group3/flume1-logger-flume.conf
  6. On hadoop002, append content to group.log in the /opt/module directory

    [emerk@hadoop002 module]$ echo 'hello' >> group.log
  7. On hadoop001, send data to port 44444

    [emerk@hadoop001 flume]$ telnet hadoop001 44444
  8. Check the output on hadoop003
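
    A minimal sketch of what to look for in step 8, assuming the three agents above are running; the exact console format comes from the logger sink and may differ slightly:

    # on hadoop002: append another line to the monitored file
    echo 'hello from group.log' >> /opt/module/group.log
    # on hadoop001: type a line into the telnet session opened in step 7
    # on hadoop003: both lines should appear in the a3 console, printed by the logger sink,
    # roughly as: Event: { headers:{} body: 68 65 6C 6C 6F ... hello from ... }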