
【Cloud Native | 60】Deploying a Kafka Cluster with docker-compose in Docker


1. Environment Preparation

1.1 Install Docker

1.2 Install Docker Compose

2. docker-compose.yaml Configuration

3. system-config.properties Configuration

4. Start the Services


1. Environment Preparation

  • The deployment server's IP address

  • Ports 9093, 9094, 9095, and 2181 available (plus 8048 for the EFAK web UI)

  • docker and docker-compose installed
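As a quick sketch, the port check above can be scripted with bash's /dev/tcp pseudo-device (this assumes bash; 8048 is included because the EFAK web UI in the compose file below uses it):

```shell
# Report whether each port the cluster needs is free on this host.
# Uses bash's /dev/tcp pseudo-device, so run it with bash.
for port in 2181 9093 9094 9095 8048; do
  if (echo > "/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    echo "${port}: in use"
  else
    echo "${port}: free"
  fi
done
```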

1.1 Install Docker

Uninstall old versions (optional)

  • If an older version of Docker was installed previously, remove it with:

sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine docker-ce

Install dependencies

  • Install yum-utils and related dependencies:

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Set up the Docker repository

  • Add the official Docker CE yum repository. For faster downloads, a domestic mirror can be used instead, such as Alibaba Cloud's:

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  • Or use the Tsinghua University mirror:

sudo yum-config-manager --add-repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo

Install Docker CE

  • Refresh the yum cache and install Docker CE:

sudo yum makecache fast  
sudo yum install -y docker-ce

Start Docker

  • Start the Docker service:

sudo systemctl start docker

Verify the Docker installation

  • Confirm that Docker installed successfully by running:

sudo docker --version

1.2 Install Docker Compose

Download Docker Compose

  • Use curl to download the Docker Compose binary from GitHub into /usr/local/bin/ (check whether the version is current):

sudo curl -L "https://github.com/docker/compose/releases/download/v2.6.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  • Note: the version v2.6.0 in the command above may not be the latest; check the Docker Compose releases page on GitHub for the current version number.

Grant execute permission

  • Make the Docker Compose binary executable:

sudo chmod +x /usr/local/bin/docker-compose

Verify the Docker Compose installation

  • Confirm that Docker Compose installed successfully by running:

docker-compose --version

2. docker-compose.yaml Configuration

The file contents are as follows:

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    restart: always
    ports:
      - "2181:2181"
  kafka1:
    image: wurstmeister/kafka
    restart: always
    depends_on:
      - zookeeper
    ports:
      - "9093:9093"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9093
      KAFKA_BROKER_ID: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka2:
    image: wurstmeister/kafka
    restart: always
    depends_on:
      - zookeeper
    ports:
      - "9094:9094"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://:9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9094
      KAFKA_BROKER_ID: 2
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  kafka3:
    image: wurstmeister/kafka
    restart: always
    depends_on:
      - zookeeper
    ports:
      - "9095:9095"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka3
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://:9095
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9095
      KAFKA_BROKER_ID: 3
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  eagle:
    image: nickzurich/efak:3.0.1
    volumes: # mount the EFAK config file
      - ./system-config.properties:/opt/efak/conf/system-config.properties
    environment: # configuration
      EFAK_CLUSTER_ZK_LIST: zookeeper:2181
    depends_on:
      - zookeeper
    ports:
      - "8048:8048"

Change the IP in KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.202.219:9093 to your server's IP (three occurrences, one per broker).
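Rather than editing the three advertised-listener entries by hand, a sed one-liner can rewrite them in one pass. This is a sketch that assumes the file is named docker-compose.yaml in the current directory; 192.168.1.100 is a placeholder for your server's actual IP:

```shell
# Rewrite the example IP in all three KAFKA_ADVERTISED_LISTENERS entries.
# 192.168.1.100 is a placeholder -- substitute your server's real IP.
sed -i 's/192\.168\.202\.219/192.168.1.100/g' docker-compose.yaml
```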

3. system-config.properties Configuration

######################################
# multi zookeeper & kafka cluster list
# Settings prefixed with 'kafka.eagle.' will be deprecated, use 'efak.' instead
######################################
efak.zk.cluster.alias=cluster
cluster.zk.list=zookeeper:2181

######################################
# zookeeper enable acl
######################################
cluster.zk.acl.enable=false
cluster.zk.acl.schema=digest
cluster.zk.acl.username=test
cluster.zk.acl.password=test123

######################################
# kraft broker
######################################
efak.kafka.cluster.alias=cluster

######################################
# broker size online list
######################################
cluster.efak.broker.size=1

######################################
# zk client thread limit
# Zookeeper cluster allows the number of clients to connect to
######################################
kafka.zk.limit.size=25

######################################
# EFAK webui port
######################################
efak.webui.port=8048

######################################
# kafka jmx acl and ssl authenticate
######################################
cluster.efak.jmx.acl=false
cluster.efak.jmx.user=keadmin
cluster.efak.jmx.password=keadmin123
cluster.efak.jmx.ssl=false
cluster.efak.jmx.truststore.location=/Users/dengjie/workspace/ssl/certificates/kafka.truststore
cluster.efak.jmx.truststore.password=ke123456

######################################
# kafka offset storage
######################################
cluster.efak.offset.storage=kafka

# If offset is out of range occurs, enable this property -- Only suitable for kafka sql
efak.sql.fix.error=false

######################################
# kafka jmx uri
######################################
cluster.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi

######################################
# kafka metrics, 15 days by default
######################################

# Whether the Kafka performance monitoring diagram is enabled
efak.metrics.charts=false

# Kafka Eagle keeps data for 30 days by default
efak.metrics.retain=30

######################################
# kafka sql topic records max
######################################
efak.sql.topic.records.max=5000
efak.sql.topic.preview.records.max=10
efak.sql.worknode.port=8787
efak.sql.distributed.enable=FALSE
efak.sql.worknode.rpc.timeout=300000
efak.sql.worknode.fetch.threshold=5000
efak.sql.worknode.fetch.timeout=20000
efak.sql.worknode.server.path=/Users/dengjie/workspace/kafka-eagle-plus/kafka-eagle-common/src/main/resources/works

######################################
# delete kafka topic token
# Set to delete the topic token, so that administrators can have the right to delete
######################################
efak.topic.token=keadmin

######################################
# kafka sasl authenticate
######################################
cluster.efak.sasl.enable=false
cluster.efak.sasl.protocol=SASL_PLAINTEXT
cluster.efak.sasl.mechanism=SCRAM-SHA-256
cluster.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
# If not set, the value can be empty
cluster.efak.sasl.client.id=
# Add kafka cluster cgroups
cluster.efak.sasl.cgroup.enable=false
cluster.efak.sasl.cgroup.topics=kafka_ads01,kafka_ads02

######################################
# kafka jdbc driver address
# Default use sqlite to store data
######################################
efak.driver=org.sqlite.JDBC
# It is important to note that the '/hadoop/kafka-eagle/db' path must exist.
efak.url=jdbc:sqlite:/hadoop/efak/db/ke.db
efak.username=root
efak.password=smartloli

LICENSE

MIT License

Copyright (c) 2023 Salent Olivick

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

4. Start the Services

sudo docker-compose up -d

Once everything is up, open EFAK at http://127.0.0.1:8048/ to view the Kafka cluster status. The username/password is admin/123456.
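The brokers take a few seconds to come up after `docker-compose up -d`, so a small helper that waits for a listener port to accept connections can be handy before opening EFAK or producing messages. This is a sketch (assumes bash; host 127.0.0.1 and port 9093 are examples, adjust to your server):

```shell
# Wait up to `tries` seconds for host:port to accept a TCP connection.
# Prints "ready" on success, "timeout" otherwise.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-30}"
  local i=0
  while [ "$i" -lt "$tries" ]; do
    if (echo > "/dev/tcp/${host}/${port}") 2>/dev/null; then
      echo "ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}

wait_for_port 127.0.0.1 9093
```

The same function works for 9094, 9095, and the EFAK port 8048.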
