
Deploying a TiDB Test Cluster on Cloud Hosts

Environment Preparation

Four pay-as-you-go cloud hosts were purchased:

Hostname  Public IP       Private IP
--------  --------------  -------------
tidb0     47.109.27.111   172.16.69.205
tidb1     47.108.114.70   172.16.69.207
tidb2     47.108.213.190  172.16.69.206
tidb3     47.109.183.173  172.16.69.208

tidb0 will host the PD server, the TiDB server, and the monitoring components.
tidb1, tidb2, and tidb3 will form the TiKV server cluster.

All four hosts are configured with the same root SSH account and the same password. This SSH password is needed not only for the first login to the control machine, but also later, when configuring the cluster, to communicate with the other hosts.
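If you would rather not pass the shared password around, key-based login from the control machine is an alternative; a minimal sketch, assuming tidb0 is the control machine and the default key path is used (the IPs are the private IPs from the table above):

ssh-keygen -t rsa        # generate a key pair on tidb0, accept the defaults
for host in 172.16.69.205 172.16.69.206 172.16.69.207 172.16.69.208; do
    ssh-copy-id root@$host   # push the public key to every node, including tidb0 itself
done

With keys in place, the -p flag used in the tiup commands later in this article can be replaced by -i pointing at the private key, or omitted entirely.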

By the time this article was published, all of the instances above had already been released.


SSH into tidb0. All the following operations are done on this control machine; tidb0 will carry out the deployment on tidb1, tidb2, and tidb3 automatically.

Take a look at the system:

[root@tidb0 ~]# hostnamectl
   Static hostname: tidb0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 20190711105006363114529432776998
           Boot ID: 73cbbc178f38445c96e86df65e3663ca
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.21.3.el7.x86_64
      Architecture: x86-64

Reference (official documentation):

https://docs.pingcap.com/zh/tidb/v5.4/production-deployment-using-tiup#第-2-步在中控机上部署-tiup-组件

This deployment also uses the online method. The TiDB version to deploy is v5.4.1.

Online Installation

  1. Run the following command to install the TiUP tool:
    curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
[root@tidb0 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 5149k  100 5149k    0     0  3341k      0  0:00:01  0:00:01 --:--:-- 3341k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
  2. Re-source the shell profile so that the tiup command is available in the current session:
source /root/.bash_profile
  3. Verify that the tiup command works / confirm that TiUP is installed:
which tiup
  4. Install the TiUP cluster component:
tiup cluster

The download starts; wait until the following line appears:
Use "tiup cluster help [command]" for more information about a command.

In any case, update the TiUP cluster component to the latest version:

tiup update --self && tiup update cluster

The expected output contains "Updated successfully!".

  5. Check the current TiUP cluster version. Run the following command to view the TiUP cluster component version:
[root@tidb0 ~]# tiup --binary cluster
/root/.tiup/components/cluster/v1.16.0/tiup-cluster

Initialize the Cluster Topology File

  • Generate a topology.yaml file with the following command:

tiup cluster template > topology.yaml

[root@tidb0 ~]# tiup cluster template > topology.yaml
[root@tidb0 ~]# ls
topology.yaml

Open topology.yaml with vim and change the IPs accordingly. The tiflash section configures TiFlash, TiDB's columnar engine used to speed up analytical queries; it can be removed in a test environment.

The modified file:

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs: {}
pd_servers:
  - host: 172.16.69.205
tidb_servers:
  - host: 172.16.69.205
tikv_servers:
  - host: 172.16.69.207
  - host: 172.16.69.206
  - host: 172.16.69.208
monitoring_servers:
  - host: 172.16.69.205
grafana_servers:
  - host: 172.16.69.205
alertmanager_servers:
  - host: 172.16.69.205

Save and exit.

  • Check for potential risks in the cluster.
    Command template: tiup cluster check ./topology.yaml --user root [-p] [-i /home/root/.ssh/gcp_rsa]

Where:

topology.yaml is the configuration file just initialized.
--user root means the deployment logs in to the target hosts as the root user; this user must be able to SSH to the target machines and must have sudo privileges there. Any other user with SSH and sudo privileges can be used instead.
[-i] and [-p] are optional. If passwordless login to the target machines is already configured, neither is needed; otherwise pick one: [-i] takes the private key of the root user (or whichever user --user specifies) that can log in to the target machines, while [-p] prompts interactively for that user's password.
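For example, if key-based login has already been set up, the same check could be run with a private key instead of a password prompt (the key path below is only an illustration):

tiup cluster check ./topology.yaml --user root -i /root/.ssh/id_rsa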

The command actually executed:

tiup cluster check ./topology.yaml --user root -p

The password prompted for is the same SSH password used above.

The check results contain many Warn and Fail items. Since this is only a test environment, they are ignored here; in a production environment they must be addressed.

Try to automatically fix the potential risks:

tiup cluster check ./topology.yaml --apply --user root -p

The auto-fix command may not be able to fix everything.
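Items that --apply reports as "auto fixing not supported", or that remain in a Fail state, can also be adjusted by hand on each node if desired; a rough sketch based on the failed checks above (run as root on every host, and note the THP change does not survive a reboot unless it is also added to the boot configuration):

# disable Transparent Huge Pages for the running system
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# kernel parameters flagged by the check
sysctl -w fs.file-max=1000000
sysctl -w net.core.somaxconn=32768
sysctl -w net.ipv4.tcp_syncookies=0

# numactl was reported missing
yum install -y numactl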

  • Prepare to deploy the TiDB cluster

First, look at the output of tiup cluster list before anything has been deployed:

[root@tidb0 ~]# tiup cluster list
Name  User  Version  Path  PrivateKey
----  ----  -------  ----  ----------
  • Deploy the TiDB cluster
tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -p

Where:

tidb-test is the name of the cluster being deployed,
v5.4.1 is the version of the cluster being deployed,
topology.yaml is the configuration file just initialized.
--user root means the deployment logs in to the target hosts as the root user; this user must be able to SSH to the target machines and must have sudo privileges there. Any other user with SSH and sudo privileges can be used instead.
[-i] and [-p] are optional. If passwordless login to the target machines is already configured, neither is needed; otherwise pick one: [-i] takes the private key of the root user (or whichever user --user specifies) that can log in to the target machines, while [-p] prompts interactively for that user's password.

The password prompted for is the same SSH password used above. The command then connects to the target hosts in the cluster and completes the deployment automatically.

The log is expected to end with Cluster `tidb-test` deployed successfully, you can start it with command: `tiup cluster start tidb-test --init`,
which indicates that the deployment succeeded.

tiup cluster start tidb-test --init is the command used later to start the cluster; before running it, check the cluster again with tiup cluster list.

[root@tidb0 ~]# tiup cluster list
Name       User  Version  Path                                            PrivateKey
----       ----  -------  ----                                            ----------
tidb-test  tidb  v5.4.1   /root/.tiup/storage/cluster/clusters/tidb-test  /root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
  • Check the state of the tidb-test cluster
[root@tidb0 ~]# tiup cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Grafana URL:        http://172.16.69.205:3000
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
172.16.69.205:9093   alertmanager  172.16.69.205  9093/9094    linux/x86_64  Down    /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.69.205:3000   grafana       172.16.69.205  3000         linux/x86_64  Down    -                             /tidb-deploy/grafana-3000
172.16.69.205:2379   pd            172.16.69.205  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.69.205:9090   prometheus    172.16.69.205  9090/12020   linux/x86_64  Down    /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.69.205:4000   tidb          172.16.69.205  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.69.206:20160  tikv          172.16.69.206  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.207:20160  tikv          172.16.69.207  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.208:20160  tikv          172.16.69.208  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8

The expected output includes, for each instance in the tidb-test cluster, its ID, role, host, listening ports, status (Down/inactive, since the cluster has not been started yet), and directory information.

Start the Cluster

  1. Normal start. Afterwards the database can be accessed as the root user without a password.
tiup cluster start tidb-test

The expected output is Started cluster `tidb-test` successfully, indicating the cluster started successfully.

  2. Secure start. A password is then required to log in to the database, so record the password returned on the command line when starting; the automatically generated password is shown only once.
tiup cluster start tidb-test --init

The expected output is as follows, indicating the cluster started successfully:

...
Start 172.16.69.206 success
Start 172.16.69.205 success
Start 172.16.69.208 success
Start 172.16.69.207 success
...
Started cluster `tidb-test` successfully
The root password of TiDB database has been changed.
The new password is: 'U719-^8@FHGM0Ln4*p'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.

The temporary password above, U719-^8@FHGM0Ln4*p, is used to log in to TiDB. (By the time this article was published, this password was no longer valid.)
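For instance, logging in from the control machine with a MySQL client (assuming one is installed there; after a normal, non --init start, connect as root with an empty password instead):

mysql -h 172.16.69.205 -P 4000 -u root -p
# enter the generated password, e.g. U719-^8@FHGM0Ln4*p, when prompted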

  3. Checking the tidb-test cluster status again, the status is now Up instead of Down, which means the cluster is running normally.
tiup cluster display tidb-test
[root@tidb0 ~]# tiup cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.69.205:2379/dashboard
Grafana URL:        http://172.16.69.205:3000
ID                   Role          Host           Ports        OS/Arch       Status   Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------   --------                      ----------
172.16.69.205:9093   alertmanager  172.16.69.205  9093/9094    linux/x86_64  Up       /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.69.205:3000   grafana       172.16.69.205  3000         linux/x86_64  Up       -                             /tidb-deploy/grafana-3000
172.16.69.205:2379   pd            172.16.69.205  2379/2380    linux/x86_64  Up|L|UI  /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.69.205:9090   prometheus    172.16.69.205  9090/12020   linux/x86_64  Up       /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.69.205:4000   tidb          172.16.69.205  4000/10080   linux/x86_64  Up       -                             /tidb-deploy/tidb-4000
172.16.69.206:20160  tikv          172.16.69.206  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.207:20160  tikv          172.16.69.207  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.208:20160  tikv          172.16.69.208  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8

TiDB's port 4000 must be opened in the cloud hosts' security group.
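A quick way to confirm the security-group rule works is a plain TCP probe against the public IP from outside; a sketch assuming nc is available on the client machine:

nc -zv 47.109.27.111 4000    # should report the port as open once the rule is in place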

  4. Run a simple connection test:
Class.forName("com.mysql.cj.jdbc.Driver");
Connection conn = DriverManager.getConnection(
        "jdbc:mysql://47.109.27.111:4000/test?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=Asia/Shanghai",
        "root", "U719-^8@FHGM0Ln4*p");
System.out.println("MySQL connection obtained successfully");
conn.close();

MySQL connection obtained successfully

Full Deployment Process and Commands

Xshell 6 (Build 0204)
Copyright (c) 2002 NetSarang Computer, Inc. All rights reserved.

Type `help' to learn how to use Xshell prompt.
[C:\~]$ Connecting to 47.109.27.111:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.

WARNING! The remote SSH server rejected X11 forwarding request.
Last login: Sat Aug 17 17:27:46 2024 from 118.112.72.89

Welcome to Alibaba Cloud Elastic Compute Service !

[root@tidb0 ~]# hostnamectl
   Static hostname: tidb0
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 20190711105006363114529432776998
           Boot ID: 73cbbc178f38445c96e86df65e3663ca
    Virtualization: kvm
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.21.3.el7.x86_64
      Architecture: x86-64
[root@tidb0 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 5149k  100 5149k    0     0  3341k      0  0:00:01  0:00:01 --:--:-- 3341k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
[root@tidb0 ~]# source /root/.bash_profile
[root@tidb0 ~]# which tiup
/root/.tiup/bin/tiup
[root@tidb0 ~]# tiup cluster
Checking updates for component cluster... Timedout (after 2s)
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.16.0-linux-amd64.tar.gz 8.83 MiB / 8.83 MiB 100.00% 55.35 MiB/s         
Deploy a TiDB cluster for productionUsage:tiup cluster [command]Available Commands:check       Perform preflight checks for the cluster.deploy      Deploy a cluster for productionstart       Start a TiDB clusterstop        Stop a TiDB clusterrestart     Restart a TiDB clusterscale-in    Scale in a TiDB clusterscale-out   Scale out a TiDB clusterdestroy     Destroy a specified clusterclean       (EXPERIMENTAL) Cleanup a specified clusterupgrade     Upgrade a specified TiDB clusterdisplay     Display information of a TiDB clusterprune       Destroy and remove instances that is in tombstone statelist        List all clustersaudit       Show audit log of cluster operationimport      Import an exist TiDB cluster from TiDB-Ansibleedit-config Edit TiDB cluster configshow-config Show TiDB cluster configreload      Reload a TiDB cluster's config and restart if neededpatch       Replace the remote package with a specified package and restart the servicerename      Rename the clusterenable      Enable a TiDB cluster automatically at bootdisable     Disable automatic enabling of TiDB clusters at bootreplay      Replay previous operation and skip successed stepstemplate    Print topology templatetls         Enable/Disable TLS between TiDB componentsmeta        backup/restore meta informationrotatessh   rotate ssh keys on all nodeshelp        Help about any commandcompletion  Generate the autocompletion script for the specified shellFlags:-c, --concurrency int     max number of parallel tasks allowed (default 5)--format string       (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")-h, --help                help for tiup--ssh string          (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.--ssh-timeout uint    Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)-v, --version             version for tiup--wait-timeout uint   Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)-y, --yes                 Skip all confirmations and assumes 'yes'Use "tiup cluster help [command]" for more information about a command.
[root@tidb0 ~]# tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.16.0-linux-amd64.tar.gz 5.03 MiB / 5.03 MiB 100.00% 36.52 MiB/s            
Updated successfully!
component cluster version v1.16.0 is already installed
Updated successfully!
[root@tidb0 ~]# tiup --binary cluster
/root/.tiup/components/cluster/v1.16.0/tiup-cluster
[root@tidb0 ~]# tiup cluster template > topology.yaml
[root@tidb0 ~]# ls
topology.yaml
[root@tidb0 ~]# vim topology.yaml 
[root@tidb0 ~]# tiup cluster check ./topology.yaml --user root -p
Input SSH password: + Detect CPU Arch Name- Detecting node 172.16.69.205 Arch info ... Done- Detecting node 172.16.69.207 Arch info ... Done- Detecting node 172.16.69.206 Arch info ... Done- Detecting node 172.16.69.208 Arch info ... Done+ Detect CPU OS Name- Detecting node 172.16.69.205 OS info ... Done- Detecting node 172.16.69.207 OS info ... Done- Detecting node 172.16.69.206 OS info ... Done- Detecting node 172.16.69.208 OS info ... Done
+ Download necessary tools- Downloading check tools for linux/amd64 ... Done
+ Collect basic system information- Getting system info of 172.16.69.205:22 ... ⠼ CopyComponent: component=insight, version=, remote=172.16.69.205:/tmp/ti...
+ Collect basic system information
+ Collect basic system information
+ Collect basic system information- Getting system info of 172.16.69.205:22 ... Done- Getting system info of 172.16.69.207:22 ... Done- Getting system info of 172.16.69.206:22 ... Done- Getting system info of 172.16.69.208:22 ... Done
+ Check time zone- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.207 ... Done- Checking node 172.16.69.206 ... Done- Checking node 172.16.69.208 ... Done
+ Check system requirements- Checking node 172.16.69.205 ... ⠦ CheckSys: host=172.16.69.205 type=exist
+ Check system requirements- Checking node 172.16.69.205 ... Done
+ Check system requirements
+ Check system requirements- Checking node 172.16.69.205 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.207 ... Done- Checking node 172.16.69.206 ... Done- Checking node 172.16.69.208 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.207 ... Done- Checking node 172.16.69.206 ... Done- Checking node 172.16.69.208 ... Done
+ Cleanup check files- Cleanup check files on 172.16.69.205:22 ... Done- Cleanup check files on 172.16.69.207:22 ... Done- Cleanup check files on 172.16.69.206:22 ... Done- Cleanup check files on 172.16.69.208:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
172.16.69.207  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.207  memory        Pass    memory size is 8192MB
172.16.69.207  disk          Warn    mount point / does not have 'noatime' option set
172.16.69.207  selinux       Pass    SELinux is disabled
172.16.69.207  thp           Fail    THP is enabled, please disable it for best performance
172.16.69.207  service       Fail    service irqbalance is not running
172.16.69.207  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.207  disk          Fail    mount point / does not have 'nodelalloc' option set
172.16.69.207  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
172.16.69.207  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
172.16.69.207  sysctl        Fail    fs.file-max = 763803, should be greater than 1000000
172.16.69.207  command       Fail    numactl not usable, bash: numactl: command not found
172.16.69.207  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.207  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
172.16.69.207  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
172.16.69.207  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.207  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.206  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.206  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
172.16.69.206  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.206  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.206  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
172.16.69.206  sysctl        Fail    fs.file-max = 763803, should be greater than 1000000
172.16.69.206  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
172.16.69.206  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
172.16.69.206  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.206  thp           Fail    THP is enabled, please disable it for best performance
172.16.69.206  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.206  command       Fail    numactl not usable, bash: numactl: command not found
172.16.69.206  memory        Pass    memory size is 8192MB
172.16.69.206  disk          Fail    mount point / does not have 'nodelalloc' option set
172.16.69.206  disk          Warn    mount point / does not have 'noatime' option set
172.16.69.206  selinux       Pass    SELinux is disabled
172.16.69.206  service       Fail    service irqbalance is not running
172.16.69.208  selinux       Pass    SELinux is disabled
172.16.69.208  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.208  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.208  service       Fail    service irqbalance is not running
172.16.69.208  thp           Fail    THP is enabled, please disable it for best performance
172.16.69.208  command       Fail    numactl not usable, bash: numactl: command not found
172.16.69.208  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
172.16.69.208  disk          Warn    mount point / does not have 'noatime' option set
172.16.69.208  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.208  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.208  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
172.16.69.208  sysctl        Fail    fs.file-max = 763803, should be greater than 1000000
172.16.69.208  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
172.16.69.208  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
172.16.69.208  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.208  memory        Pass    memory size is 8192MB
172.16.69.208  disk          Fail    mount point / does not have 'nodelalloc' option set
172.16.69.205  thp           Fail    THP is enabled, please disable it for best performance
172.16.69.205  service       Fail    service irqbalance is not running
172.16.69.205  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.205  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.205  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
172.16.69.205  memory        Pass    memory size is 8192MB
172.16.69.205  disk          Warn    mount point / does not have 'noatime' option set
172.16.69.205  sysctl        Fail    fs.file-max = 763803, should be greater than 1000000
172.16.69.205  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
172.16.69.205  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
172.16.69.205  command       Fail    numactl not usable, bash: numactl: command not found
172.16.69.205  disk          Fail    mount point / does not have 'nodelalloc' option set
172.16.69.205  limits        Fail    soft limit of 'stack' for user 'tidb' is not set or too low
172.16.69.205  limits        Fail    soft limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.205  limits        Fail    hard limit of 'nofile' for user 'tidb' is not set or too low
172.16.69.205  selinux       Pass    SELinux is disabled
[root@tidb0 ~]# tiup cluster check ./topology.yaml --apply --user root -p
Input SSH password: + Detect CPU Arch Name- Detecting node 172.16.69.205 Arch info ... Done- Detecting node 172.16.69.207 Arch info ... Done- Detecting node 172.16.69.206 Arch info ... Done- Detecting node 172.16.69.208 Arch info ... Done+ Detect CPU OS Name- Detecting node 172.16.69.205 OS info ... Done- Detecting node 172.16.69.207 OS info ... Done- Detecting node 172.16.69.206 OS info ... Done- Detecting node 172.16.69.208 OS info ... Done
+ Download necessary tools- Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information- Getting system info of 172.16.69.205:22 ... ⠴ CopyComponent: component=insight, version=, remote=172.16.69.205:/tmp/ti...
+ Collect basic system information
+ Collect basic system information- Getting system info of 172.16.69.205:22 ... Done- Getting system info of 172.16.69.207:22 ... Done- Getting system info of 172.16.69.206:22 ... Done- Getting system info of 172.16.69.208:22 ... Done
+ Check time zone- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.207 ... Done- Checking node 172.16.69.206 ... Done- Checking node 172.16.69.208 ... Done
+ Check system requirements- Checking node 172.16.69.205 ... ⠦ CheckSys: host=172.16.69.205 type=exist
+ Check system requirements
+ Check system requirements
+ Check system requirements- Checking node 172.16.69.205 ... Done
+ Check system requirements- Checking node 172.16.69.205 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.207 ... Done- Checking node 172.16.69.206 ... Done- Checking node 172.16.69.208 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.205 ... Done- Checking node 172.16.69.207 ... Done- Checking node 172.16.69.206 ... Done- Checking node 172.16.69.208 ... Done
+ Cleanup check files- Cleanup check files on 172.16.69.205:22 ... Done- Cleanup check files on 172.16.69.207:22 ... Done- Cleanup check files on 172.16.69.206:22 ... Done- Cleanup check files on 172.16.69.208:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
172.16.69.205  memory        Pass    memory size is 8192MB
172.16.69.205  disk          Fail    mount point / does not have 'nodelalloc' option set, auto fixing not supported
172.16.69.205  disk          Warn    mount point / does not have 'noatime' option set, auto fixing not supported
172.16.69.205  limits        Fail    will try to set 'tidb    hard    nofile    1000000'
172.16.69.205  limits        Fail    will try to set 'tidb    soft    stack    10240'
172.16.69.205  limits        Fail    will try to set 'tidb    soft    nofile    1000000'
172.16.69.205  thp           Fail    will try to disable THP, please check again after reboot
172.16.69.205  service       Fail    will try to 'start irqbalance.service'
172.16.69.205  command       Fail    numactl not usable, bash: numactl: command not found, auto fixing not supported
172.16.69.205  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.205  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
172.16.69.205  sysctl        Fail    will try to set 'fs.file-max = 1000000'
172.16.69.205  sysctl        Fail    will try to set 'net.core.somaxconn = 32768'
172.16.69.205  sysctl        Fail    will try to set 'net.ipv4.tcp_syncookies = 0'
172.16.69.205  selinux       Pass    SELinux is disabled
172.16.69.205  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.207  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.207  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
172.16.69.207  memory        Pass    memory size is 8192MB
172.16.69.207  limits        Fail    will try to set 'tidb    soft    nofile    1000000'
172.16.69.207  limits        Fail    will try to set 'tidb    hard    nofile    1000000'
172.16.69.207  limits        Fail    will try to set 'tidb    soft    stack    10240'
172.16.69.207  sysctl        Fail    will try to set 'fs.file-max = 1000000'
172.16.69.207  sysctl        Fail    will try to set 'net.core.somaxconn = 32768'
172.16.69.207  sysctl        Fail    will try to set 'net.ipv4.tcp_syncookies = 0'
172.16.69.207  disk          Fail    mount point / does not have 'nodelalloc' option set, auto fixing not supported
172.16.69.207  disk          Warn    mount point / does not have 'noatime' option set, auto fixing not supported
172.16.69.207  service       Fail    will try to 'start irqbalance.service'
172.16.69.207  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.207  thp           Fail    will try to disable THP, please check again after reboot
172.16.69.207  command       Fail    numactl not usable, bash: numactl: command not found, auto fixing not supported
172.16.69.207  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.207  selinux       Pass    SELinux is disabled
172.16.69.206  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.206  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.206  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
172.16.69.206  limits        Fail    will try to set 'tidb    soft    nofile    1000000'
172.16.69.206  limits        Fail    will try to set 'tidb    hard    nofile    1000000'
172.16.69.206  limits        Fail    will try to set 'tidb    soft    stack    10240'
172.16.69.206  service       Fail    will try to 'start irqbalance.service'
172.16.69.206  memory        Pass    memory size is 8192MB
172.16.69.206  sysctl        Fail    will try to set 'fs.file-max = 1000000'
172.16.69.206  sysctl        Fail    will try to set 'net.core.somaxconn = 32768'
172.16.69.206  sysctl        Fail    will try to set 'net.ipv4.tcp_syncookies = 0'
172.16.69.206  thp           Fail    will try to disable THP, please check again after reboot
172.16.69.206  disk          Warn    mount point / does not have 'noatime' option set, auto fixing not supported
172.16.69.206  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.206  disk          Fail    mount point / does not have 'nodelalloc' option set, auto fixing not supported
172.16.69.206  selinux       Pass    SELinux is disabled
172.16.69.206  command       Fail    numactl not usable, bash: numactl: command not found, auto fixing not supported
172.16.69.208  selinux       Pass    SELinux is disabled
172.16.69.208  timezone      Pass    time zone is the same as the first PD machine: Asia/Shanghai
172.16.69.208  cpu-cores     Pass    number of CPU cores / threads: 4
172.16.69.208  disk          Fail    mount point / does not have 'nodelalloc' option set, auto fixing not supported
172.16.69.208  memory        Pass    memory size is 8192MB
172.16.69.208  limits        Fail    will try to set 'tidb    soft    nofile    1000000'
172.16.69.208  limits        Fail    will try to set 'tidb    hard    nofile    1000000'
172.16.69.208  limits        Fail    will try to set 'tidb    soft    stack    10240'
172.16.69.208  os-version    Pass    OS is CentOS Linux 7 (Core) 7.6.1810
172.16.69.208  cpu-governor  Warn    Unable to determine current CPU frequency governor policy, auto fixing not supported
172.16.69.208  command       Fail    numactl not usable, bash: numactl: command not found, auto fixing not supported
172.16.69.208  service       Fail    will try to 'start irqbalance.service'
172.16.69.208  disk          Warn    mount point / does not have 'noatime' option set, auto fixing not supported
172.16.69.208  sysctl        Fail    will try to set 'fs.file-max = 1000000'
172.16.69.208  sysctl        Fail    will try to set 'net.core.somaxconn = 32768'
172.16.69.208  sysctl        Fail    will try to set 'net.ipv4.tcp_syncookies = 0'
172.16.69.208  thp           Fail    will try to disable THP, please check again after reboot
+ Try to apply changes to fix failed checks- Applying changes on 172.16.69.205 ... ⠙ Sysctl: host=172.16.69.205 net.ipv4.tcp_syncookies = 0- Applying changes on 172.16.69.207 ... ⠙ Sysctl: host=172.16.69.207 net.ipv4.tcp_syncookies = 0- Applying changes on 172.16.69.206 ... ⠙ Sysctl: host=172.16.69.206 net.ipv4.tcp_syncookies = 0
+ Try to apply changes to fix failed checks- Applying changes on 172.16.69.205 ... ⠹ Shell: host=172.16.69.205, sudo=true, command=`if [ -d /sys/kernel/mm/transpar...- Applying changes on 172.16.69.207 ... ⠹ Sysctl: host=172.16.69.207 net.ipv4.tcp_syncookies = 0- Applying changes on 172.16.69.206 ... ⠹ Shell: host=172.16.69.206, sudo=true, command=`if [ -d /sys/kernel/mm/transpar...
+ Try to apply changes to fix failed checks- Applying changes on 172.16.69.205 ... Done- Applying changes on 172.16.69.207 ... Done- Applying changes on 172.16.69.206 ... Done- Applying changes on 172.16.69.208 ... Done
[root@tidb0 ~]# tiup cluster list
Name  User  Version  Path  PrivateKey
----  ----  -------  ----  ----------
[root@tidb0 ~]# tiup cluster deploy tidb-test v5.4.1 ./topology.yaml --user root -p
Input SSH password: + Detect CPU Arch Name- Detecting node 172.16.69.205 Arch info ... Done- Detecting node 172.16.69.207 Arch info ... Done- Detecting node 172.16.69.206 Arch info ... Done- Detecting node 172.16.69.208 Arch info ... Done+ Detect CPU OS Name- Detecting node 172.16.69.205 OS info ... Done- Detecting node 172.16.69.207 OS info ... Done- Detecting node 172.16.69.206 OS info ... Done- Detecting node 172.16.69.208 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-test
Cluster version: v5.4.1
Role          Host           Ports        OS/Arch       Directories
----          ----           -----        -------       -----------
pd            172.16.69.205  2379/2380    linux/x86_64  /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv          172.16.69.207  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.69.206  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv          172.16.69.208  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb          172.16.69.205  4000/10080   linux/x86_64  /tidb-deploy/tidb-4000
prometheus    172.16.69.205  9090/12020   linux/x86_64  /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana       172.16.69.205  3000         linux/x86_64  /tidb-deploy/grafana-3000
alertmanager  172.16.69.205  9093/9094    linux/x86_64  /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components- Download pd:v5.4.1 (linux/amd64) ... Done- Download tikv:v5.4.1 (linux/amd64) ... Done- Download tidb:v5.4.1 (linux/amd64) ... Done- Download prometheus:v5.4.1 (linux/amd64) ... Done- Download grafana:v5.4.1 (linux/amd64) ... Done- Download alertmanager: (linux/amd64) ... Done- Download node_exporter: (linux/amd64) ... Done- Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments- Prepare 172.16.69.205:22 ... Done- Prepare 172.16.69.207:22 ... Done- Prepare 172.16.69.206:22 ... Done- Prepare 172.16.69.208:22 ... Done
+ Deploy TiDB instance- Copy pd -> 172.16.69.205 ... Done- Copy tikv -> 172.16.69.207 ... Done- Copy tikv -> 172.16.69.206 ... Done- Copy tikv -> 172.16.69.208 ... Done- Copy tidb -> 172.16.69.205 ... Done- Copy prometheus -> 172.16.69.205 ... Done- Copy grafana -> 172.16.69.205 ... Done- Copy alertmanager -> 172.16.69.205 ... Done- Deploy node_exporter -> 172.16.69.205 ... Done- Deploy node_exporter -> 172.16.69.207 ... Done- Deploy node_exporter -> 172.16.69.206 ... Done- Deploy node_exporter -> 172.16.69.208 ... Done- Deploy blackbox_exporter -> 172.16.69.205 ... Done- Deploy blackbox_exporter -> 172.16.69.207 ... Done- Deploy blackbox_exporter -> 172.16.69.206 ... Done- Deploy blackbox_exporter -> 172.16.69.208 ... Done
+ Copy certificate to remote host
+ Init instance configs- Generate config pd -> 172.16.69.205:2379 ... Done- Generate config tikv -> 172.16.69.207:20160 ... Done- Generate config tikv -> 172.16.69.206:20160 ... Done- Generate config tikv -> 172.16.69.208:20160 ... Done- Generate config tidb -> 172.16.69.205:4000 ... Done- Generate config prometheus -> 172.16.69.205:9090 ... Done- Generate config grafana -> 172.16.69.205:3000 ... Done- Generate config alertmanager -> 172.16.69.205:9093 ... Done
+ Init monitor configs- Generate config node_exporter -> 172.16.69.206 ... Done- Generate config node_exporter -> 172.16.69.208 ... Done- Generate config node_exporter -> 172.16.69.205 ... Done- Generate config node_exporter -> 172.16.69.207 ... Done- Generate config blackbox_exporter -> 172.16.69.205 ... Done- Generate config blackbox_exporter -> 172.16.69.207 ... Done- Generate config blackbox_exporter -> 172.16.69.206 ... Done- Generate config blackbox_exporter -> 172.16.69.208 ... Done
Enabling component pdEnabling instance 172.16.69.205:2379Enable instance 172.16.69.205:2379 success
Enabling component tikvEnabling instance 172.16.69.208:20160Enabling instance 172.16.69.206:20160Enabling instance 172.16.69.207:20160Enable instance 172.16.69.206:20160 successEnable instance 172.16.69.208:20160 successEnable instance 172.16.69.207:20160 success
Enabling component tidbEnabling instance 172.16.69.205:4000Enable instance 172.16.69.205:4000 success
Enabling component prometheusEnabling instance 172.16.69.205:9090Enable instance 172.16.69.205:9090 success
Enabling component grafanaEnabling instance 172.16.69.205:3000Enable instance 172.16.69.205:3000 success
Enabling component alertmanagerEnabling instance 172.16.69.205:9093Enable instance 172.16.69.205:9093 success
Enabling component node_exporterEnabling instance 172.16.69.208Enabling instance 172.16.69.207Enabling instance 172.16.69.206Enabling instance 172.16.69.205Enable 172.16.69.205 successEnable 172.16.69.206 successEnable 172.16.69.208 successEnable 172.16.69.207 success
Enabling component blackbox_exporterEnabling instance 172.16.69.208Enabling instance 172.16.69.205Enabling instance 172.16.69.207Enabling instance 172.16.69.206Enable 172.16.69.205 successEnable 172.16.69.206 successEnable 172.16.69.207 successEnable 172.16.69.208 success
Cluster `tidb-test` deployed successfully, you can start it with command: `tiup cluster start tidb-test --init`
[root@tidb0 ~]# tiup cluster list
Name       User  Version  Path                                            PrivateKey
----       ----  -------  ----                                            ----------
tidb-test  tidb  v5.4.1   /root/.tiup/storage/cluster/clusters/tidb-test  /root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa
[root@tidb0 ~]# tiup cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Grafana URL:        http://172.16.69.205:3000
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
172.16.69.205:9093   alertmanager  172.16.69.205  9093/9094    linux/x86_64  Down    /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.69.205:3000   grafana       172.16.69.205  3000         linux/x86_64  Down    -                             /tidb-deploy/grafana-3000
172.16.69.205:2379   pd            172.16.69.205  2379/2380    linux/x86_64  Down    /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.69.205:9090   prometheus    172.16.69.205  9090/12020   linux/x86_64  Down    /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.69.205:4000   tidb          172.16.69.205  4000/10080   linux/x86_64  Down    -                             /tidb-deploy/tidb-4000
172.16.69.206:20160  tikv          172.16.69.206  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.207:20160  tikv          172.16.69.207  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.208:20160  tikv          172.16.69.208  20160/20180  linux/x86_64  N/A     /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8
[root@tidb0 ~]# tiup cluster start tidb-test --init
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.206
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.208
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.207
+ [Parallel] - UserSSH: user=tidb, host=172.16.69.205
+ [ Serial ] - StartCluster
Starting component pdStarting instance 172.16.69.205:2379Start instance 172.16.69.205:2379 success
Starting component tikvStarting instance 172.16.69.208:20160Starting instance 172.16.69.207:20160Starting instance 172.16.69.206:20160Start instance 172.16.69.206:20160 successStart instance 172.16.69.208:20160 successStart instance 172.16.69.207:20160 success
Starting component tidbStarting instance 172.16.69.205:4000Start instance 172.16.69.205:4000 success
Starting component prometheusStarting instance 172.16.69.205:9090Start instance 172.16.69.205:9090 success
Starting component grafanaStarting instance 172.16.69.205:3000Start instance 172.16.69.205:3000 success
Starting component alertmanagerStarting instance 172.16.69.205:9093Start instance 172.16.69.205:9093 success
Starting component node_exporterStarting instance 172.16.69.207Starting instance 172.16.69.206Starting instance 172.16.69.208Starting instance 172.16.69.205Start 172.16.69.206 successStart 172.16.69.205 successStart 172.16.69.208 successStart 172.16.69.207 success
Starting component blackbox_exporterStarting instance 172.16.69.207Starting instance 172.16.69.205Starting instance 172.16.69.208Starting instance 172.16.69.206Start 172.16.69.206 successStart 172.16.69.205 successStart 172.16.69.208 successStart 172.16.69.207 success
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
The root password of TiDB database has been changed.
The new password is: 'U719-^8@FHGM0Ln4*p'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
[root@tidb0 ~]# tiup cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.69.205:2379/dashboard
Grafana URL:        http://172.16.69.205:3000
ID                   Role          Host           Ports        OS/Arch       Status   Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------   --------                      ----------
172.16.69.205:9093   alertmanager  172.16.69.205  9093/9094    linux/x86_64  Up       /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
172.16.69.205:3000   grafana       172.16.69.205  3000         linux/x86_64  Up       -                             /tidb-deploy/grafana-3000
172.16.69.205:2379   pd            172.16.69.205  2379/2380    linux/x86_64  Up|L|UI  /tidb-data/pd-2379            /tidb-deploy/pd-2379
172.16.69.205:9090   prometheus    172.16.69.205  9090/12020   linux/x86_64  Up       /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
172.16.69.205:4000   tidb          172.16.69.205  4000/10080   linux/x86_64  Up       -                             /tidb-deploy/tidb-4000
172.16.69.206:20160  tikv          172.16.69.206  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.207:20160  tikv          172.16.69.207  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
172.16.69.208:20160  tikv          172.16.69.208  20160/20180  linux/x86_64  Up       /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 8
[root@tidb0 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           7.4G        605M        5.1G        508K        1.7G        6.5G
Swap:            0B          0B          0B
[root@tidb0 ~]# 
