How Spring Boot integrates a Kafka utility class
What is Kafka?
Kafka is an open-source stream processing platform developed by the Apache Software Foundation and written in Scala and Java. It is a high-throughput, distributed publish-subscribe messaging system that can handle all of a website's user action streams. Such actions (page views, searches, and other user activity) are a key ingredient of many social features on the modern web. Because of the throughput involved, this data has traditionally been handled by log processing and log aggregation. That approach works for log data and offline analysis systems such as Hadoop, but it falls short when real-time processing is required. Kafka aims to unify online and offline message processing: it can feed offline systems through Hadoop-style parallel loading while also delivering messages in real time across the cluster.
Application scenarios
Message system: like traditional message systems (also called message middleware), Kafka offers system decoupling, redundant storage, traffic peak shaving, buffering, asynchronous communication, scalability, and recoverability. On top of that, Kafka provides message ordering guarantees and retroactive (replay) consumption, which most messaging systems find hard to achieve.
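"Retroactive consumption" just means rewinding offsets and re-reading records that were already consumed. Here is a minimal sketch using the plain consumer API; the broker address, group id, and topic name are placeholders, not part of this article's code:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ReplayDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-demo");             // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));    // placeholder topic
            consumer.poll(Duration.ofSeconds(1));                            // join the group, get assignments
            consumer.seekToBeginning(consumer.assignment());                 // rewind every assigned partition
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}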
Storage system: Kafka persists messages to disk, which greatly reduces the risk of data loss compared with memory-based storage systems. Thanks to this persistence and its multi-replica mechanism, Kafka can serve as a long-term data storage system: just set the topic's data retention policy to keep messages permanently, or enable its log compaction feature.
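For instance, a topic meant to be a long-term store can be created with unlimited retention, or with log compaction enabled. A minimal AdminClient sketch follows; the broker address and topic name are placeholders, while retention.ms and cleanup.policy are standard Kafka topic configs:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class LongTermTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, String> configs = new HashMap<>();
            // Either keep every record forever...
            configs.put("retention.ms", "-1");
            // ...or keep only the latest record per key via log compaction:
            // configs.put("cleanup.policy", "compact");
            NewTopic topic = new NewTopic("user-profiles", 3, (short) 1).configs(configs);
            admin.createTopics(Collections.singletonList(topic)).all().get(); // block until created
        }
    }
}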
Stream processing platform: Kafka not only provides a reliable data source for every popular stream processing framework, but also ships with a complete stream processing library (Kafka Streams), offering operations such as windowing, joins, transformations, and aggregations.
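As a rough illustration of that library, the classic word count below exercises the flatMapValues/groupBy/count operations. It is a sketch only: it assumes the kafka-streams dependency (not part of the pom.xml below), and the broker address and topic names are placeholders:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Arrays;
import java.util.Properties;

public class WordCountDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-demo");    // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read lines from the input topic, split into words, group by word, count
        KStream<String, String> lines = builder.stream("lines-in");
        KTable<String, Long> counts = lines
                .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\s+")))
                .groupBy((key, word) -> word)
                .count();
        counts.toStream().to("counts-out", Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}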
Let's take a look at the detailed code for integrating a Kafka utility class into Spring Boot.
pom.xml
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.12.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.6.3</version>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.83</version>
</dependency>
Utility class
package com.bbl.demo.utils;

import com.alibaba.fastjson.JSONObject;
import org.apache.commons.lang3.exception.ExceptionUtils;
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

import java.time.Duration;
import java.util.*;
import java.util.concurrent.ExecutionException;

public class KafkaUtils {
    private static AdminClient admin;

    /**
     * Creates a Kafka producer.
     * @author o
     * @return KafkaProducer
     */
    private static KafkaProducer<String, String> createProducer() {
        Properties props = new Properties();
        // Kafka broker addresses
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "node01:9092,node02:9092,node03:9092");
        // acks: 0 = success once the message is sent; 1 = success once the leader has it;
        // all = success only after every in-sync replica has written it
        props.put("acks", "all");
        // Number of retries
        props.put("retries", Integer.MAX_VALUE);
        // Batch size in bytes
        props.put("batch.size", 16384);
        // How long to wait for a batch to fill before sending it
        props.put("linger.ms", 1);
        // Upper bound on the memory buffer the producer may use (default 32 MB)
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    /**
     * Creates a Kafka consumer.
     * @author o
     * @return KafkaConsumer
     */
    private static KafkaConsumer<String, String> createConsumer() {
        Properties props = new Properties();
        // Kafka broker addresses
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "node01:9092,node02:9092,node03:9092");
        // Consumer group id (give each independent consumer its own group)
        props.put("group.id", "111");
        // Commit offsets automatically
        props.put("enable.auto.commit", "true");
        // How often to commit the offsets of consumed messages
        props.put("auto.commit.interval.ms", "1000");
        // Session timeout; past this, Kafka may drop the consumer and rebalance
        props.put("session.timeout.ms", "30000");
        // Where to start reading when no committed offset exists
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }

    /**
     * Creates the Kafka cluster admin client for the given servers.
     * @author o
     */
    public static void createAdmin(String servers) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        admin = AdminClient.create(props);
    }

    /**
     * Creates the Kafka cluster admin client with the default servers.
     * @author o
     */
    private static void createAdmin() {
        createAdmin("node01:9092,node02:9092,node03:9092");
    }

    /**
     * Sends a JSON string to the given topic.
     * @author o
     * @param topic
     * @param jsonMessage
     */
    public static void sendMessage(String topic, String jsonMessage) {
        KafkaProducer<String, String> producer = createProducer();
        producer.send(new ProducerRecord<>(topic, jsonMessage));
        producer.close();
    }

    /**
     * Consumes the given topic and prints records to the console; intended for testing.
     * @author o
     * @param topic
     */
    public static void consume(String topic) {
        KafkaConsumer<String, String> consumer = createConsumer();
        consumer.subscribe(Arrays.asList(topic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }

    /**
     * Consumes the given topics.
     * @author o
     * @param topics
     */
    public static void consume(String... topics) {
        KafkaConsumer<String, String> consumer = createConsumer();
        consumer.subscribe(Arrays.asList(topics));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }

    /**
     * Sends several JSON strings to the given topic.
     * Useful for sending messages in batches with better throughput.
     * @author o
     * @param topic
     * @param jsonMessages
     * @throws InterruptedException
     */
    public static void sendMessage(String topic, String... jsonMessages) throws InterruptedException {
        KafkaProducer<String, String> producer = createProducer();
        for (String jsonMessage : jsonMessages) {
            producer.send(new ProducerRecord<>(topic, jsonMessage));
        }
        producer.close();
    }

    /**
     * Converts each Map in the list to JSON and sends it to the given topic.
     * Useful for sending messages in batches with better throughput.
     * @author o
     * @param topic
     * @param mapMessageToJSONForArray
     */
    public static void sendMessage(String topic, List<Map<Object, Object>> mapMessageToJSONForArray) {
        KafkaProducer<String, String> producer = createProducer();
        for (Map<Object, Object> mapMessageToJSON : mapMessageToJSONForArray) {
            String array = JSONObject.toJSON(mapMessageToJSON).toString();
            producer.send(new ProducerRecord<>(topic, array));
        }
        producer.close();
    }

    /**
     * Converts a Map to JSON and sends it to the given topic.
     * @author o
     * @param topic
     * @param mapMessageToJSON
     */
    public static void sendMessage(String topic, Map<Object, Object> mapMessageToJSON) {
        KafkaProducer<String, String> producer = createProducer();
        String array = JSONObject.toJSON(mapMessageToJSON).toString();
        producer.send(new ProducerRecord<>(topic, array));
        producer.close();
    }

    /**
     * Creates a topic.
     * @author o
     * @param name topic name
     * @param numPartitions number of partitions
     * @param replicationFactor replication factor of each partition
     */
    public static void createTopic(String name, int numPartitions, int replicationFactor) {
        if (admin == null) {
            createAdmin();
        }
        Map<String, String> configs = new HashMap<>();
        CreateTopicsResult result = admin.createTopics(
                Arrays.asList(new NewTopic(name, numPartitions, (short) replicationFactor).configs(configs)));
        // Block on each future to check whether creation succeeded
        for (Map.Entry<String, KafkaFuture<Void>> entry : result.values().entrySet()) {
            try {
                entry.getValue().get();
                System.out.println("topic " + entry.getKey() + " created");
            } catch (InterruptedException | ExecutionException e) {
                if (ExceptionUtils.getRootCause(e) instanceof TopicExistsException) {
                    System.out.println("topic " + entry.getKey() + " existed");
                }
            }
        }
    }

    /**
     * Deletes one or more topics.
     * @author o
     * @param names topic names
     */
    public static void deleteTopic(String name, String... names) {
        if (admin == null) {
            createAdmin();
        }
        // Arrays.asList returns a fixed-size list, so copy it before adding
        List<String> topics = new ArrayList<>(Arrays.asList(names));
        topics.add(name);
        DeleteTopicsResult result = admin.deleteTopics(topics);
        // Block on each future to check whether deletion succeeded
        for (Map.Entry<String, KafkaFuture<Void>> entry : result.values().entrySet()) {
            try {
                entry.getValue().get();
                System.out.println("topic " + entry.getKey() + " deleted");
            } catch (InterruptedException | ExecutionException e) {
                if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
                    System.out.println("topic " + entry.getKey() + " not exist");
                }
            }
        }
    }

    /**
     * Prints the details of one or more topics.
     * @author o
     * @param names topic names
     */
    public static void describeTopic(String name, String... names) {
        if (admin == null) {
            createAdmin();
        }
        // Arrays.asList returns a fixed-size list, so copy it before adding
        List<String> topics = new ArrayList<>(Arrays.asList(names));
        topics.add(name);
        DescribeTopicsResult result = admin.describeTopics(topics);
        // Print the details of each topic
        for (Map.Entry<String, KafkaFuture<TopicDescription>> entry : result.values().entrySet()) {
            try {
                TopicDescription description = entry.getValue().get();
                System.out.println("topic " + entry.getKey() + " describe");
                System.out.println("\t name: " + description.name());
                System.out.println("\t partitions: ");
                description.partitions().forEach(p -> {
                    System.out.println("\t\t index: " + p.partition());
                    System.out.println("\t\t\t leader: " + p.leader());
                    System.out.println("\t\t\t replicas: " + p.replicas());
                    System.out.println("\t\t\t isr: " + p.isr());
                });
                System.out.println("\t internal: " + description.isInternal());
            } catch (InterruptedException | ExecutionException e) {
                if (ExceptionUtils.getRootCause(e) instanceof UnknownTopicOrPartitionException) {
                    System.out.println("topic " + entry.getKey() + " not exist");
                }
            }
        }
    }

    /**
     * Lists all topics.
     * @author o
     * @return Set<String> topic names
     */
    public static Set<String> listTopic() {
        if (admin == null) {
            createAdmin();
        }
        ListTopicsResult result = admin.listTopics();
        try {
            Set<String> names = result.names().get();
            names.stream().map(x -> x + "\t").forEach(System.out::print);
            return names;
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(listTopic());
    }
}
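To exercise the utility class end to end, something like the hypothetical demo below works; the topic name and JSON payloads are arbitrary, and note that consume never returns (it loops forever printing records):

public class KafkaUtilsDemo {
    public static void main(String[] args) throws InterruptedException {
        // Create a topic with 3 partitions and a replication factor of 2
        KafkaUtils.createTopic("demo-topic", 3, 2);
        // Send one JSON message, then a small batch
        KafkaUtils.sendMessage("demo-topic", "{\"id\":1,\"name\":\"test\"}");
        KafkaUtils.sendMessage("demo-topic", "{\"id\":2}", "{\"id\":3}");
        // Print all topic names
        System.out.println(KafkaUtils.listTopic());
        // Block and print records as they arrive
        KafkaUtils.consume("demo-topic");
    }
}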
The above is the detailed code for integrating a Kafka utility class into Spring Boot.