Flume: Reading Messages from a JMS Queue and Writing Them to HDFS

Posted by blfbuaa


This post uses Apache Flume to read messages from a JMS message queue (ActiveMQ) and write them to HDFS. The Flume agent is configured as follows:

flume-agent.conf

# Name the components on this agent
agentHdfs.sources = jms_source
agentHdfs.sinks = hdfs_sink
agentHdfs.channels = mem_channel


# Describe/configure the JMS source (consumes from an ActiveMQ queue)
agentHdfs.sources.jms_source.type = jms
agentHdfs.sources.jms_source.initialContextFactory = org.apache.activemq.jndi.ActiveMQInitialContextFactory
agentHdfs.sources.jms_source.connectionFactory = ConnectionFactory
# Name of the ActiveMQ queue to consume from
agentHdfs.sources.jms_source.destinationName = BUSINESS_DATA
agentHdfs.sources.jms_source.providerURL = tcp://hadoop-master:61616
agentHdfs.sources.jms_source.destinationType = QUEUE



# Describe the HDFS sink
agentHdfs.sinks.hdfs_sink.type = hdfs
agentHdfs.sinks.hdfs_sink.hdfs.path = hdfs://hadoop-master/data/flume/%Y-%m-%d/%H
agentHdfs.sinks.hdfs_sink.hdfs.filePrefix = %{hostname}/events-
agentHdfs.sinks.hdfs_sink.hdfs.maxOpenFiles = 5000
agentHdfs.sinks.hdfs_sink.hdfs.batchSize = 500
agentHdfs.sinks.hdfs_sink.hdfs.fileType = DataStream
agentHdfs.sinks.hdfs_sink.hdfs.writeFormat = Text
# Roll files by event count and time only (rollSize = 0 disables size-based rolling)
agentHdfs.sinks.hdfs_sink.hdfs.rollSize = 0
agentHdfs.sinks.hdfs_sink.hdfs.rollCount = 1000000
agentHdfs.sinks.hdfs_sink.hdfs.rollInterval = 600
# Use the agent's local time to resolve %Y-%m-%d/%H instead of a timestamp header
agentHdfs.sinks.hdfs_sink.hdfs.useLocalTimeStamp = true



# Use a channel that buffers events in memory
agentHdfs.channels.mem_channel.type = memory
agentHdfs.channels.mem_channel.capacity = 1000
# Must be at least the HDFS sink's hdfs.batchSize (500), or sink transactions will fail
agentHdfs.channels.mem_channel.transactionCapacity = 500

# Bind the source and sink to the channel
agentHdfs.sources.jms_source.channels = mem_channel
agentHdfs.sinks.hdfs_sink.channel = mem_channel
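
The JMS source does not ship with a JMS provider, so the ActiveMQ client JARs must be on the Flume agent's classpath (for example placed in Flume's lib/ or a plugins.d/ directory) before starting. Also note that %{hostname} in hdfs.filePrefix only resolves if each event carries a hostname header, e.g. one added by an interceptor. Assuming the configuration above is saved as conf/flume-agent.conf, the agent can be started with the standard flume-ng launcher (paths are placeholders for your own installation):

  bin/flume-ng agent \
    --conf conf \
    --conf-file conf/flume-agent.conf \
    --name agentHdfs \
    -Dflume.root.logger=INFO,console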
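
To verify the pipeline end to end, a short JMS producer can push a test message into the BUSINESS_DATA queue, which the agent should then deliver under /data/flume/ on HDFS. The following is a minimal sketch, assuming the ActiveMQ client library (e.g. activemq-client) and the JMS API are on the classpath; the broker URL and queue name mirror the source configuration above:

  import javax.jms.Connection;
  import javax.jms.Destination;
  import javax.jms.MessageProducer;
  import javax.jms.Session;
  import javax.jms.TextMessage;
  import org.apache.activemq.ActiveMQConnectionFactory;

  // Minimal test producer: sends one text message to the BUSINESS_DATA queue
  // so the Flume agent above has an event to pull and ship to HDFS.
  public class JmsTestProducer {
      public static void main(String[] args) throws Exception {
          // Same broker URL as providerURL in flume-agent.conf
          ActiveMQConnectionFactory factory =
                  new ActiveMQConnectionFactory("tcp://hadoop-master:61616");
          Connection connection = factory.createConnection();
          connection.start();
          try {
              // Non-transacted session with automatic acknowledgement
              Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
              Destination queue = session.createQueue("BUSINESS_DATA");
              MessageProducer producer = session.createProducer(queue);

              TextMessage message = session.createTextMessage("hello from JMS test producer");
              producer.send(message);
              System.out.println("Sent test message to BUSINESS_DATA");
          } finally {
              connection.close();
          }
      }
  }

Because rollSize is 0 and rollCount/rollInterval are large, the sink keeps the output file open (with a .tmp suffix) until 1,000,000 events or 600 seconds have passed, so a small test may take up to ten minutes to show a finalized file in HDFS.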
