
Deploying a Spring Cloud Alibaba project on Kubernetes

This article is my own write-up; if you reproduce it, please credit the author and link to the original.

This article assumes some background knowledge, a degree of self-study ability, and the ability to troubleshoot problems on your own; reading it cold will be heavy going, so I suggest building up some fundamentals first and coming back to this as a next step.

Prerequisites: you should already know Docker and have at least two virtual machines. If you are not there yet, start with my previous article:

CentOS 8: install Docker and run Java files (centos8 docker安装java8) — CSDN blog

I wrote this after the whole system was already up and running, so some steps and the problems I debugged along the way may be half-forgotten; apologies in advance. If anything is unclear, ask in the comments and I will answer and fold the fix back into the article.

The technologies covered are:

k8s, flannel, nfs, mysql, nginx

nacos, sentinel and seata belong to Spring Cloud Alibaba; if you are not deploying that kind of project, you can skip those three parts.

I am using Spring Boot 2.7.18, Spring Cloud 2021.0.9, Spring Cloud Alibaba 2021.0.5.0, Nacos 2.2.3, Sentinel 1.8.6 and Seata 1.6.1. If your versions differ, look up the matching version matrix on the Spring Cloud Alibaba site and adjust the versions in the deployment steps below accordingly. Do not copy blindly — mismatched versions can keep the project from starting at all.

1. Install Kubernetes

The two virtual machines are configured as follows:

Hostname    IP              Spec
k8smaster   192.168.233.20  2G RAM, 20G disk
k8snode1    192.168.233.21  8G RAM, 20G disk

node1 has more memory because it originally had 2G, which froze the machine while deploying nacos; I raised it to 4G, deployed my seven Spring Cloud Alibaba services, and it froze again, so I raised it to 8G. With everything fully deployed, node1 actually uses about 4.7G, so size yours accordingly.

The hostname can be changed with the following command:

hostnamectl set-hostname xxxx

1.1 Disable the firewall, SELinux and swap

Run on both the master and the node servers.

Disable the firewall. Kubernetes manages access rules through iptables by default, and a running firewalld can block traffic between the cluster's nodes.

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux.

SELinux is a security subsystem that provides mandatory access control. Kubernetes does not work smoothly with SELinux in enforcing mode out of the box — kubelet and the container runtime need access to host paths that the default policies restrict — so the usual simplification is to set it to permissive or disabled. If you need to keep it enforcing, be prepared to maintain the required policies yourself.

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Disable swap.

Swap is Linux's on-disk virtual memory. Kubernetes considers swapping harmful to its performance and scheduling guarantees, so on startup it checks that swap is off and refuses to start otherwise. (A startup flag exists to skip this preflight check, if you are interested — look it up.) Here I follow the standard approach and turn swap off.

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
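The `#&` in that sed expression re-emits the whole matched line behind a comment marker, so only lines mentioning swap get commented out. A quick illustration on a throwaway copy (the sample fstab content here is made up):

```shell
# Create a throwaway sample fstab (hypothetical content, for illustration only)
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap none swap defaults 0 0
EOF

# Same sed invocation as above, pointed at the sample instead of /etc/fstab
sed -ri 's/.*swap.*/#&/' /tmp/fstab.sample

cat /tmp/fstab.sample
```

After running it, the swap line gains a leading `#` while the root line is untouched, which is exactly what happens to /etc/fstab.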

1.2 Install Docker

Run on both the master and the node servers.

CentOS 8: install Docker and run Java files (centos8 docker安装java8) — CSDN blog

1.3 Install kubelet, kubeadm and kubectl

Run on both the master and the node servers.

kubeadm: the tool that bootstraps the Kubernetes cluster

kubelet: the node agent that talks to Docker to create the required containers; the control-plane components such as kube-apiserver, kube-proxy and kube-scheduler are all started automatically through kubelet

kubectl: the command-line client for controlling the cluster

I am using version 1.18.0; adjust as needed.

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable --now kubelet

1.4 Create the master node

Run on the master node only.

Before running the init, make sure the kubelet service has started; check its status with:

systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2024-09-09 17:39:32 CST; 1 day 21h ago
     Docs: https://kubernetes.io/docs/
 Main PID: 798 (kubelet)
    Tasks: 19 (limit: 11363)
   Memory: 77.5M
   CGroup: /system.slice/kubelet.service
           └─798 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver>

Sep 11 15:31:10 k8smaster kubelet[798]: E0911 15:31:10.396187     798 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgro>
Sep 11 15:31:20 k8smaster kubelet[798]: E0911 15:31:20.420096     798 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgro>

Then check with the following command that the Docker containers Kubernetes depends on have been created:

docker ps

Two entries copied out at random — if you see these two, or something similar, things should be fine.

CONTAINER ID   IMAGE                                               COMMAND                  CREATED        STATUS        PORTS                                       NAMES
d8cfa18ec556   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 46 hours ago   Up 46 hours                                               k8s_POD_kube-apiserver-k8smaster_kube-system_24d2e38ac3ee7dbd192b59ae301f2958_1
59ca2ec352a8   43940c34f24f                                        "/usr/local/bin/kube…"   46 hours ago   Up 46 hours                                               k8s_kube-proxy_kube-proxy-jvp78_kube-system_2c89a685-e9d1-490a-b945-13a38eca665d_1

apiserver-advertise-address is the static IP of the current server: once docker and kubelet are installed, kubelet automatically runs kube-apiserver in Docker, so fill in this machine's address.

The last two flags are the Service and Pod network CIDRs. They are best left untouched — these values match flannel's defaults, and if you change them, flannel's configuration must be changed to match.

kubeadm init \
--v=5 \
--apiserver-advertise-address=192.168.233.20 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

If you see the following output, the master node is up.

If it fails, see 1.6.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.233.20:6443 --token 1aczra.6ttwnx7vkcvfvr26 \
    --discovery-token-ca-cert-hash sha256:85feeb8ca8b4a9161446732118ca87f5f995bcdcf3f13d4711cf9aff4d50360e

Then, as the output suggests, run these three lines on the master node; they grant the local kubectl access to operate the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.5 Join the node to the cluster

Run on the node only.

When the master finished initializing, this command was printed at the bottom of its output; copy it onto the node server and run it there to join the cluster as a worker node:

kubeadm join 192.168.233.20:6443 --token 1aczra.6ttwnx7vkcvfvr26 \
    --discovery-token-ca-cert-hash sha256:85feeb8ca8b4a9161446732118ca87f5f995bcdcf3f13d4711cf9aff4d50360e

If you forgot to note it down, run the following on the master to print the join command again:

kubeadm token create --print-join-command

1.6 When creating the master node fails

Run on the master node only.

First check whether the kubelet service started successfully and whether the corresponding Docker containers were created (1.4 shows how), then look up the error messages online.

After fixing the kubelet problem, before running kubeadm init again, reset kubeadm and delete the leftover files from the previous init.

Reset kubeadm:

kubeadm reset

Delete the leftover files:

rm -rf /etc/kubernetes/*
rm -rf $HOME/.kube/*

1.7 Deploy flannel on k8s

Run on the master node only.

At this point, the following command shows the master node still in NotReady state; we need to deploy flannel, the network plugin that allocates IPs for the Services and Pods we create:

kubectl get node

Download the flannel deployment yml and apply it with kubectl create:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml

The flannel version I downloaded runs its components in their own namespace. Check the flannel pods with the command below; once they are up, kubectl get node should show the master as Ready.

kubectl get pods -n kube-flannel

2. Deploy NFS

NFS is a file-sharing service. When running Docker containers we often map directories inside the container to the host, which makes editing config files and reading logs convenient. With Kubernetes there may be many worker nodes, and visiting each server to tweak configs or read logs is painful. NFS solves this: map the container directories of all nodes onto a single file server, and all that maintenance happens in one place instead of on every node.

2.1 Set up the NFS service

Setting up NFS is very simple. Most guides install it directly on the host; I run it in Docker here. If you prefer the host route, it is only a couple of steps to look up.

The NFS service can live on a separate server; to keep things simple I put it on the master.

Pull the nfs image:

docker pull itsthenetwork/nfs-server-alpine

Run the NFS container. This is how it works as of September 2024; in older image versions the environment variable for the shared directory was SHARE_DIR_NAME rather than NFS_EXPORTED_DIR. If docker ps shows the container restarting in a loop, check why with docker logs nfs-server; a missing environment variable is printed by name, and you can infer its purpose from the name and add it to the run command.

docker run --privileged -d --name nfs-server --restart always -v /data/nfs:/nfs/shared -p 2049:2049 -e NFS_EXPORTED_DIR="/nfs/shared" itsthenetwork/nfs-server-alpine

2.2 Set up nfs-client-provisioner on k8s

Run on the master node only.

When Kubernetes creates StatefulSet workloads it needs PVCs as storage; nfs-client-provisioner automatically provisions volumes backed by the NFS service to satisfy those PVCs.

kubectl create -f nfs-deployment.yaml

nfs-deployment.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs # can be changed; referenced again below
            - name: NFS_SERVER
              value: 192.168.233.20 # IP of the server running NFS
            - name: NFS_PATH
              value: /nfs-client-provisioner # root directory for PVC data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.233.20 # IP of the server running NFS
            path: /nfs-client-provisioner # root directory for PVC data
kubectl create -f nfs-class.yaml

nfs-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # the deployment's PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
kubectl create -f nfs-rbac.yaml

nfs-rbac.yaml

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

3. Deploy MySQL

3.1 Deploy MySQL 8 on k8s

Run on the master node only.

kubectl create -f mysql-pro.yml

mysql-pro.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql8-cnf
data:
  my.cnf: |
    [mysqld]
    # MySQL configuration file
    # Remove leading # and set to the amount of RAM for the most important data
    # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
    # innodb_buffer_pool_size = 128M
    #
    # Remove leading # to turn on a very important data integrity option: logging
    # changes to the binary log between backups.
    # log_bin
    #
    # Remove leading # to set options mainly useful for reporting servers.
    # The server defaults are faster for transactions and fast SELECTs.
    # Adjust sizes as needed, experiment to find the optimal values.
    # join_buffer_size = 128M
    # sort_buffer_size = 2M
    # read_rnd_buffer_size = 2M
    host-cache-size=0
    skip-name-resolve
    datadir=/var/lib/mysql
    socket=/var/run/mysqld/mysqld.sock
    secure-file-priv=/var/lib/mysql-files
    user=mysql
    mysql_native_password=ON
    pid-file=/var/run/mysqld/mysqld.pid
    [client]
    socket=/var/run/mysqld/mysqld.sock
    !includedir /etc/mysql/conf.d/
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-pro
  labels:
    name: mysql-pro
spec:
  replicas: 1
  selector:
    name: mysql-pro
  template:
    metadata:
      labels:
        name: mysql-pro
    spec:
      containers:
      - name: mysql-pro
        image: mysql:8
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql # mount the MySQL data directory for persistence
        - name: mysql-mycnf
          mountPath: /etc/my.cnf # mount the configuration file
          subPath: "my.cnf"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456" # root password
        # If you only need one database and one MySQL account, the variables below
        # are a convenient shortcut; if you follow my steps, delete them
#        - name: MYSQL_DATABASE
#          value: "demo2"
#        - name: MYSQL_USER
#          value: "demo2"
#        - name: MYSQL_PASSWORD
#          value: "123456"
      volumes:
      - name: mysql-data # must match the name under volumeMounts
        nfs:
          server: 192.168.233.20 # MySQL data is stored via the NFS server; remember to create the directories first
          path: /data/mysql-pro
      - name: mysql-mycnf
        configMap:
          name: mysql8-cnf # the config file comes from the ConfigMap defined at the top
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-pro
  labels:
    name: mysql-pro
spec:
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30004 # only with type: NodePort; range 30000-32767, must not clash with other services
  selector:
    name: mysql-pro
  type: NodePort # default is ClusterIP, unreachable from outside the cluster; NodePort exposes an external port

3.2 Create the database nacos needs

Run on the master node only.

nacos can store its data on the file system, but I use MySQL storage here. Using it as the example, you can create whatever MySQL accounts and databases you need, with the matching grants, in the same way.

Once MySQL is up, find the name of its pod:

kubectl get pods

Sample output:

NAME                                             READY   STATUS    RESTARTS   AGE
mysql-pro-twkc9                                  1/1     Running   2          2d4h
nfs-client-provisioner-5864f57757-vc9g4          1/1     Running   2          2d4h

Step into the MySQL container:

kubectl exec -it mysql-pro-twkc9 -- /bin/bash

Once inside, the prompt changes to bash-5.1#.

Type mysql -uroot -p, press Enter, and enter the root password configured in the yml from 3.1 to get a MySQL shell.

Then create the database, create the database user, and grant that user privileges on the database — basic MySQL administration I will not go over. One thing to note: the mysql:8 image defaults to caching_sha2_password for password hashing, which keeps Navicat from connecting. So when creating the user, specify the mysql_native_password type for its password; after that, Navicat connects normally and you can do the rest from Navicat.
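For reference, a minimal sketch of those statements, assuming the database name nacos, the user nacosuser and the password 123456 that the nacos ConfigMap in 4.2 expects (all three are placeholders — adjust them to your own values and keep them in sync):

```sql
-- hypothetical names; keep them in sync with the nacos-cm ConfigMap in 4.2
CREATE DATABASE nacos DEFAULT CHARACTER SET utf8mb4;
-- mysql_native_password so that older clients such as Navicat can authenticate
CREATE USER 'nacosuser'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
GRANT ALL PRIVILEGES ON nacos.* TO 'nacosuser'@'%';
FLUSH PRIVILEGES;
```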

Create the tables the nacos database needs:

/* https://github.com/alibaba/nacos/blob/master/distribution/conf/nacos-mysql.sql */

CREATE TABLE IF NOT EXISTS `config_info` (
  `id`           bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id`      varchar(255) NOT NULL COMMENT 'data_id',
  `group_id`     varchar(255)          DEFAULT NULL,
  `content`      longtext     NOT NULL COMMENT 'content',
  `md5`          varchar(32)           DEFAULT NULL COMMENT 'md5',
  `gmt_create`   datetime     NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime     NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user`     text COMMENT 'source user',
  `src_ip`       varchar(50)           DEFAULT NULL COMMENT 'source ip',
  `app_name`     varchar(128)          DEFAULT NULL,
  `tenant_id`    varchar(128)          DEFAULT '' COMMENT '租户字段',
  `c_desc`       varchar(256)          DEFAULT NULL,
  `c_use`        varchar(64)           DEFAULT NULL,
  `effect`       varchar(64)           DEFAULT NULL,
  `type`         varchar(64)           DEFAULT NULL,
  `c_schema`     text,
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';

CREATE TABLE IF NOT EXISTS `config_info_aggr` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(255) NOT NULL COMMENT 'group_id',
  `datum_id` varchar(255) NOT NULL COMMENT 'datum_id',
  `content` longtext NOT NULL COMMENT '内容',
  `gmt_modified` datetime NOT NULL COMMENT '修改时间',
  `app_name` varchar(128) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段';

CREATE TABLE IF NOT EXISTS `config_info_beta` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';

CREATE TABLE IF NOT EXISTS `config_info_tag` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `tag_id` varchar(128) NOT NULL COMMENT 'tag_id',
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL COMMENT 'content',
  `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  `src_user` text COMMENT 'source user',
  `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';

CREATE TABLE IF NOT EXISTS `config_tags_relation` (
  `id` bigint(20) NOT NULL COMMENT 'id',
  `tag_name` varchar(128) NOT NULL COMMENT 'tag_name',
  `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type',
  `data_id` varchar(255) NOT NULL COMMENT 'data_id',
  `group_id` varchar(128) NOT NULL COMMENT 'group_id',
  `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
  `nid` bigint(20) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`nid`),
  UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';

CREATE TABLE IF NOT EXISTS `group_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_group_id` (`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表';

CREATE TABLE IF NOT EXISTS `his_config_info` (
  `id` bigint(64) unsigned NOT NULL,
  `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `data_id` varchar(255) NOT NULL,
  `group_id` varchar(128) NOT NULL,
  `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
  `content` longtext NOT NULL,
  `md5` varchar(32) DEFAULT NULL,
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `src_user` text,
  `src_ip` varchar(50) DEFAULT NULL,
  `op_type` char(10) DEFAULT NULL,
  `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
  PRIMARY KEY (`nid`),
  KEY `idx_gmt_create` (`gmt_create`),
  KEY `idx_gmt_modified` (`gmt_modified`),
  KEY `idx_did` (`data_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造';

CREATE TABLE IF NOT EXISTS `tenant_capacity` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
  `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
  `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
  `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
  `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
  `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数',
  `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
  `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
  `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表';

CREATE TABLE IF NOT EXISTS `tenant_info` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `kp` varchar(128) NOT NULL COMMENT 'kp',
  `tenant_id` varchar(128) default '' COMMENT 'tenant_id',
  `tenant_name` varchar(128) default '' COMMENT 'tenant_name',
  `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
  `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
  `gmt_create` bigint(20) NOT NULL COMMENT '创建时间',
  `gmt_modified` bigint(20) NOT NULL COMMENT '修改时间',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
  KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';

CREATE TABLE IF NOT EXISTS `users` (
  `username` varchar(50) NOT NULL PRIMARY KEY,
  `password` varchar(500) NOT NULL,
  `enabled` boolean NOT NULL
);

CREATE TABLE IF NOT EXISTS `roles` (
  `username` varchar(50) NOT NULL,
  `role` varchar(50) NOT NULL,
  UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE
);

CREATE TABLE IF NOT EXISTS `permissions` (
  `role` varchar(50) NOT NULL,
  `resource` varchar(255) NOT NULL,
  `action` varchar(8) NOT NULL,
  UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE
);

4. Deploy nacos

Kubernetes Nacos | Nacos official site — the official documentation is here.

4.1 Download nacos

Run on the master node only.

Download the nacos-k8s project; the yml files needed for deployment live under ./deploy. I paste the yml contents below, so downloading is optional.

git clone https://github.com/nacos-group/nacos-k8s.git

Following the official docs, I chose the NFS + MySQL deployment mode; both were set up in the earlier sections.

4.2 Deploy nacos on k8s

Run on the master node only.

kubectl create -f nacos-pvc-nfs.yaml

nacos-pvc-nfs.yaml

# Please read the wiki article first:
# https://github.com/nacos-group/nacos-k8s/wiki/%E4%BD%BF%E7%94%A8peerfinder%E6%89%A9%E5%AE%B9%E6%8F%92%E4%BB%B6
---
apiVersion: v1 # the Service needs no changes
kind: Service
metadata:
  name: nacos-headless
  labels:
    app: nacos
spec:
  publishNotReadyAddresses: true
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    ## election port kept for compatibility with 1.4.x
    - port: 7848
      name: old-raft-rpc
      targetPort: 7848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
data:
  mysql.host: "mysql-pro" # name of the MySQL Service we deployed above
  mysql.db.name: "nacos" # database name, account and password created in 3.2
  mysql.port: "3306" # port of the MySQL Service
  mysql.user: "nacosuser"
  mysql.password: "123456"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
spec:
  podManagementPolicy: Parallel
  serviceName: nacos-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:1.1
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /home/nacos/plugins/peer-finder
              name: data
              subPath: peer-finder
      containers:
        - name: nacos
          imagePullPolicy: IfNotPresent
          image: nacos/nacos-server:2.2.3
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client-port
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: DOMAIN_NAME
              value: "cluster.local"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.host
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            - name: SPRING_DATASOURCE_PLATFORM
              value: "mysql"
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
          volumeMounts:
            - name: data
              mountPath: /home/nacos/plugins/peer-finder
              subPath: peer-finder
            - name: data
              mountPath: /home/nacos/data
              subPath: data
            - name: data
              mountPath: /home/nacos/logs
              subPath: logs
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          # name of the StorageClass created alongside nfs-client-provisioner;
          # the provisioner creates the PVC automatically, and the three
          # directories mounted above are created automatically as well
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 20Gi
  selector:
    matchLabels:
      app: nacos

Although replicas is set to 3, I only have one worker node, and nacos' official defaults request 0.5 CPU cores and 2G of memory per pod. My VM does not have that much to spare, so only one nacos pod is actually created and the others stay Pending.

4.3 Proxy the nacos console through nginx

Run on the master node only.

I deploy nginx with a Deployment. First create the directories it needs under the NFS shared directory — mine is /data/nfs; go into your own:

mkdir nginx
mkdir nginx/conf
mkdir nginx/html

Put the nginx configuration file nginx.conf into nginx/conf. You also need mime.types, which ships with any nginx download — this article is long enough without pasting it, so grab a copy of nginx and copy its mime.types into nginx/conf as well.

nginx.conf

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        location /nacos {
            proxy_pass   http://nacos-headless:8848; # name of the nacos Service
        }

        location / {
            root   html;
            index  index.html index.htm; # reserved for a frontend project; the container's html directory is mounted to nginx/html on NFS, so a frontend build can simply be dropped there
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

With those in place, deploy nginx on k8s: a Deployment for the pods, plus a NodePort Service to expose it outside the cluster.

kubectl create -f kube-nginx.yaml

kube-nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.2 # nginx image, version 1.20.2
        ports:
        - containerPort: 80 # port exposed by the container
        volumeMounts:
        - name: web-nginx-config
          mountPath: /etc/nginx
        - name: web-nginx-front
          mountPath: /usr/share/nginx/html
      volumes:
      - name: web-nginx-config
        nfs:
          server: 192.168.233.20
          path: /nginx/conf
      - name: web-nginx-front
        nfs:
          server: 192.168.233.20
          path: /nginx/html
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80 # virtual port the Service provides, mapped to the container's real port
    nodePort: 30001 # port the cluster exposes externally
    protocol: TCP
    name: nginx-in # name of the port, for readability
    targetPort: 80 # port the container exposes; defaults to the same as port if omitted
  type: NodePort # external access via node ports

Once nginx is deployed, the nacos console is reachable from outside through any node's IP and the NodePort:

http://192.168.233.20:30001/nacos/index.html

http://192.168.233.21:30001/nacos/index.html

5. Deploy seata

5.1 Create the seata database

Run on the master node only.

Create the seata database and seata account and grant privileges using the method from 3.2; remember to change the database name, account and password in the ConfigMap in 5.2 to match.

Once that is done, run the following SQL (with Navicat or any client) against the seata database:

-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status`, `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `status`         TINYINT      NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
    `lock_key`   CHAR(20)    NOT NULL,
    `lock_value` VARCHAR(20) NOT NULL,
    `expire`     BIGINT,
    PRIMARY KEY (`lock_key`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);

In the database of every Spring Cloud service that connects to seata, run the following:

CREATE TABLE `undo_log` (
    `branch_id`     bigint       NOT NULL COMMENT 'branch transaction id',
    `xid`           varchar(128) NOT NULL COMMENT 'global transaction id',
    `context`       varchar(128) NOT NULL COMMENT 'undo_log context,such as serialization',
    `rollback_info` longblob     NOT NULL COMMENT 'rollback info',
    `log_status`    int          NOT NULL COMMENT '0:normal status,1:defense status',
    `log_created`   datetime(6)  NOT NULL COMMENT 'create datetime',
    `log_modified`  datetime(6)  NOT NULL COMMENT 'modify datetime',
    UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`),
    KEY `ix_log_created` (`log_created`)
) ENGINE = InnoDB COMMENT ='AT transaction mode undo table';

5.2 Deploy seata on k8s

Run on the master node only.

I deploy seata in AT mode, with nacos as both the registry and the configuration center, and MySQL as the store.

kubectl create -f kube-seata.yml

kube-seata.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: seata-server-config
  namespace: default
data:
  application.yml: |
    server:
      port: 7091
    spring:
      application:
        name: seata-server
    logging:
      config: classpath:logback-spring.xml
      file:
        path: ${user.home}/logs/seata
      extend:
        logstash-appender:
          destination: 127.0.0.1:4560
        kafka-appender:
          bootstrap-servers: 127.0.0.1:9092
          topic: logback_to_logstash
    console:
      user:
        username: seata
        password: seata
    seata:
      config:
        # support: nacos, consul, apollo, zk, etcd3
        type: nacos
        nacos:
          server-addr: nacos-headless:8848 # name of the nacos Service
          namespace:
          group: SEATA_GROUP
          username:
          password:
          context-path:
      registry:
        # support: nacos, eureka, redis, zk, consul, etcd3, sofa
        type: nacos
        nacos:
          application: seata-server
          server-addr: nacos-headless:8848
          group: SEATA_GROUP
          namespace:
          cluster: default
          username:
          password:
          context-path:
      store:
        # support: file 、 db 、 redis
        mode: db
        db:
          datasource: druid
          db-type: mysql
          driver-class-name: com.mysql.cj.jdbc.Driver
          url: jdbc:mysql://mysql-pro:3306/seata?rewriteBatchedStatements=true # name of the MySQL Service
          user: demo2
          password: 123456
          min-conn: 10
          max-conn: 100
          global-table: global_table
          branch-table: branch_table
          lock-table: lock_table
          distributed-lock-table: distributed_lock
          query-limit: 1000
          max-wait: 5000
    #  server:
    #    service-port: 8091 #If not configured, the default is '${server.port} + 1000'
      security:
        secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
        tokenValidityInMilliseconds: 1800000
        ignore:
          urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seata-server
  namespace: default
  labels:
    k8s-app: seata-server
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: seata-server
  template:
    metadata:
      labels:
        k8s-app: seata-server
    spec:
      containers:
        - name: seata-server
          image: docker.io/seataio/seata-server:1.6.1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http-7091
              containerPort: 7091
              protocol: TCP
            - name: http-8091
              containerPort: 8091
              protocol: TCP
          volumeMounts:
            - name: seata-config
              mountPath: /seata-server/resources/application.yml
              subPath: application.yml
      volumes:
        - name: seata-config
          configMap:
            name: seata-server-config
---
apiVersion: v1
kind: Service
metadata:
  name: seata-server
  namespace: default
  labels:
    k8s-app: seata-server
spec:
  ports:
    - port: 8091
      protocol: TCP
      name: seata-8091
    - port: 7091
      protocol: TCP
      name: seata-7091
  selector:
    k8s-app: seata-server

6. Deploy the Java project

This is just an example; adapt it to your own needs.

6.1 Build the Docker image for the Spring Cloud Alibaba service

Run on the node only.

If the command below is unclear, see my earlier Docker article:

CentOS 8: install Docker and run Java files (centos8 docker安装java8) — CSDN blog

Put cloud-alibaba-gateway8000-1.0-SNAPSHOT.jar and ali-gateway.dockerfile in the same directory, then run:

docker build -f ali-gateway.dockerfile -t ali-cloud-gateway:latest .

ali-gateway.dockerfile

FROM openjdk:8
RUN mkdir /usr/local/project
ADD cloud-alibaba-gateway8000-1.0-SNAPSHOT.jar  /usr/local/project/cloud-alibaba-gateway8000-1.0-SNAPSHOT.jar
EXPOSE 8000
ENTRYPOINT ["java","-jar","/usr/local/project/cloud-alibaba-gateway8000-1.0-SNAPSHOT.jar"]

After it runs, confirm with docker images that the image was created.

6.2 Deploy ali-cloud-gateway

Run on the master node only.

The gateway is the entry point of the Spring Cloud services, so it must be exposed through a Service; when deploying the other services you can drop the Service part. I use a Deployment here for convenience — Deployments fit standalone apps, while cloud services are usually better off as StatefulSets, which make log collection easier; adapt the nacos StatefulSet from 4.2 if you want that.

kubectl create -f ali-cloud-gateway.yml

ali-cloud-gateway.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ali-cloud-gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ali-cloud-gateway
  template:
    metadata:
      labels:
        app: ali-cloud-gateway
    spec:
      containers:
      - name: ali-cloud-gateway
        image: ali-cloud-gateway:latest
        imagePullPolicy: IfNotPresent # image built locally in 6.1, do not try to pull it from a registry
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: ali-cloud-gateway-svc
spec:
  selector:
    app: ali-cloud-gateway
  ports:
  - port: 8000 # virtual port the Service provides, mapped to the container's real port
    protocol: TCP
    name: ali-cloud-gateway-svc # name of the port, for readability
    targetPort: 8000 # port the container exposes; defaults to the same as port if omitted
    nodePort: 30005
  type: NodePort

Requests sent to the NodePort exposed by the gateway now reach the project successfully.

6.3 Deploy sentinel

Run on the node only.

sentinel is also just a single jar; the deployment is similar to 6.2, so only briefly:

docker build -f sentinel.dockerfile -t sentinel:1.8.6 .

sentinel.dockerfile

FROM openjdk:8
RUN mkdir /usr/local/project
ADD sentinel-dashboard-1.8.6.jar  /usr/local/project/sentinel-dashboard-1.8.6.jar
EXPOSE 8080
EXPOSE 8719
ENTRYPOINT ["java","-Dserver.port=8080","-Dcsp.sentinel.dashboard.server=localhost:8080","-Dproject.name=sentinel-dashboard","-jar","/usr/local/project/sentinel-dashboard-1.8.6.jar"]

Run on the master node only.

To expose the sentinel console externally, you can follow the approach from 4.3 where nginx proxies nacos.

kubectl create -f kube-sentinel.yml

kube-sentinel.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentinel-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sentinel
  template:
    metadata:
      labels:
        app: sentinel
    spec:
      containers:
      - name: sentinel
        image: sentinel:1.8.6
        ports:
        - containerPort: 8080
        - containerPort: 8719
---
apiVersion: v1
kind: Service
metadata:
  name: sentinel-front
spec:
  selector:
    app: sentinel
  ports:
  - port: 8080 # virtual port the Service provides, mapped to the container's real port
    protocol: TCP
    name: sentinel-front # name of the port, for readability
    targetPort: 8080 # port the container exposes; defaults to the same as port if omitted
  - port: 8719
    protocol: TCP
    name: sentinel-service

That is all the knowledge needed to deploy the whole system. Some steps repeat earlier ones, and I have noted where; try working through it yourself to consolidate what you have learned.
