Build environment:
- Host: Windows 10 x64, running VMware Workstation 14.0.0; the host-side NAT adapter IP is 192.168.195.1, and ShadowSocks listens on port 1080 (this port must be opened in the advanced firewall settings of the Windows Control Panel).
- Guest: CentOS Linux release 7.4.1708, 2 vCPUs, 3 GB RAM. NIC 0 uses NAT networking with IP 192.168.195.131; NIC 1 uses Host-Only networking with IP 192.168.162.128.
Installing and Configuring Docker
Install the Docker packages
- Remove any old Docker packages first, as they may conflict with the new ones:
$ yum remove -y docker docker-io docker-selinux python-docker-py
- Add Docker's Yum repository:
$ vi /etc/yum.repos.d/docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
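A .repo file is plain INI, so a typo here tends to surface only later as a confusing yum error. As a hypothetical pre-flight check (the /tmp path is illustrative; any Python works), the file can be parsed before yum ever sees it:

```shell
# Write the repo definition to a scratch path and parse it as INI;
# a parse failure usually means a typo in the section header or a key.
cat > /tmp/docker.repo <<'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
python3 -c 'import configparser,sys; configparser.ConfigParser().read(sys.argv[1])' /tmp/docker.repo \
  && echo "repo file parses OK"
```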
- Install the Docker packages:
$ yum update
$ yum install -y epel-release
$ yum install -y docker-engine docker-engine-selinux
Configure a registry mirror
- Use Alibaba Cloud's Docker registry mirror (you can also register for an endpoint of your own):
$ mkdir -p /etc/docker
$ vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
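A syntax error in daemon.json leaves the Docker daemon refusing to start after the restart below, so it is worth validating the JSON first. A minimal sketch (a /tmp copy is used here so the check is self-contained; in practice point it at /etc/docker/daemon.json):

```shell
# Validate the daemon.json contents before restarting Docker; json.tool
# exits non-zero on any syntax error.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```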
- Restart the Docker service:
$ systemctl daemon-reload && systemctl enable docker && systemctl restart docker && systemctl status docker
- Check that the mirror service works:
$ docker run --rm hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Starting the Registry Service
localhost is used as the Registry address here; to use the Registry across a LAN, replace it with the host's IP or a resolvable hostname.
- Run the Registry container, mapped to port 4000:
$ docker run -d --name registry --restart=always -p 4000:5000 -v /opt/registry:/var/lib/registry registry:2
- Edit the Docker service unit so the daemon trusts the local (insecure) Registry:
$ vi /usr/lib/systemd/system/docker.service
...
#ExecStart=/usr/bin/dockerd
ExecStart=/usr/bin/dockerd --insecure-registry localhost:4000
...
- Restart the Docker service:
$ systemctl daemon-reload && systemctl restart docker
- Verify that the Registry service responds:
$ curl -X GET http://localhost:4000/v2/_catalog
{"repositories":[]}
- Push an image to the Registry:
$ docker pull centos:7
$ docker tag centos:7 localhost:4000/centos:7
$ docker push localhost:4000/centos:7
- Confirm that the pushed image is listed in the Registry:
$ curl -X GET http://localhost:4000/v2/_catalog
{"repositories":["centos"]}
Installing and Configuring Kolla
Fetch the Kolla source
$ mkdir -pv /opt/kolla
$ cd /opt/kolla
$ git clone https://github.com/openstack/kolla
$ cd kolla
$ git checkout -b devel/pike remotes/origin/stable/pike
Install the dependencies
$ pip install pyopenssl tox
$ pip install -r requirements.txt -r test-requirements.txt
Generate the default configuration
$ tox -e genconfig
$ mkdir -pv /etc/kolla/
$ cp -v etc/kolla/kolla-build.conf /etc/kolla/
Generate the Dockerfiles
- Generate source-type Dockerfiles from the Pike branch's default configuration:
$ python tools/build.py -t source --template-only --work-dir=..
- Inspect the base image's Dockerfile:
$ cat ../docker/base/Dockerfile
The part to focus on is between BEGIN REPO ENABLEMENT and END REPO ENABLEMENT:
FROM centos:7
LABEL maintainer="Kolla Project (https://launchpad.net/kolla)" name="base" build-date="20180327"
RUN groupadd --force --gid 42401 ansible \
&& useradd -M --shell /usr/sbin/nologin --uid 42401 --gid 42401 ansible \
&& groupadd --force --gid 42402 aodh \
&& useradd -M --shell /usr/sbin/nologin --uid 42402 --gid 42402 aodh \
&& groupadd --force --gid 42403 barbican \
&& useradd -M --shell /usr/sbin/nologin --uid 42403 --gid 42403 barbican \
&& groupadd --force --gid 42404 bifrost \
&& useradd -M --shell /usr/sbin/nologin --uid 42404 --gid 42404 bifrost \
&& groupadd --force --gid 42471 blazar \
&& useradd -M --shell /usr/sbin/nologin --uid 42471 --gid 42471 blazar \
&& groupadd --force --gid 42405 ceilometer \
&& useradd -M --shell /usr/sbin/nologin --uid 42405 --gid 42405 ceilometer \
&& groupadd --force --gid 64045 ceph \
&& useradd -M --shell /usr/sbin/nologin --uid 64045 --gid 64045 ceph \
&& groupadd --force --gid 42406 chrony \
&& useradd -M --shell /usr/sbin/nologin --uid 42406 --gid 42406 chrony \
&& groupadd --force --gid 42407 cinder \
&& useradd -M --shell /usr/sbin/nologin --uid 42407 --gid 42407 cinder \
&& groupadd --force --gid 42408 cloudkitty \
&& useradd -M --shell /usr/sbin/nologin --uid 42408 --gid 42408 cloudkitty \
&& groupadd --force --gid 42409 collectd \
&& useradd -M --shell /usr/sbin/nologin --uid 42409 --gid 42409 collectd \
&& groupadd --force --gid 42410 congress \
&& useradd -M --shell /usr/sbin/nologin --uid 42410 --gid 42410 congress \
&& groupadd --force --gid 42411 designate \
&& useradd -M --shell /usr/sbin/nologin --uid 42411 --gid 42411 designate \
&& groupadd --force --gid 42464 dragonflow \
&& useradd -M --shell /usr/sbin/nologin --uid 42464 --gid 42464 dragonflow \
&& groupadd --force --gid 42466 ec2api \
&& useradd -M --shell /usr/sbin/nologin --uid 42466 --gid 42466 ec2api \
&& groupadd --force --gid 42412 elasticsearch \
&& useradd -M --shell /usr/sbin/nologin --uid 42412 --gid 42412 elasticsearch \
&& groupadd --force --gid 42413 etcd \
&& useradd -M --shell /usr/sbin/nologin --uid 42413 --gid 42413 etcd \
&& groupadd --force --gid 42474 fluentd \
&& useradd -M --shell /usr/sbin/nologin --uid 42474 --gid 42474 fluentd \
&& groupadd --force --gid 42414 freezer \
&& useradd -M --shell /usr/sbin/nologin --uid 42414 --gid 42414 freezer \
&& groupadd --force --gid 42415 glance \
&& useradd -M --shell /usr/sbin/nologin --uid 42415 --gid 42415 glance \
&& groupadd --force --gid 42416 gnocchi \
&& useradd -M --shell /usr/sbin/nologin --uid 42416 --gid 42416 gnocchi \
&& groupadd --force --gid 42417 grafana \
&& useradd -M --shell /usr/sbin/nologin --uid 42417 --gid 42417 grafana \
&& groupadd --force --gid 42454 haproxy \
&& useradd -M --shell /usr/sbin/nologin --uid 42454 --gid 42454 haproxy \
&& groupadd --force --gid 42418 heat \
&& useradd -M --shell /usr/sbin/nologin --uid 42418 --gid 42418 heat \
&& groupadd --force --gid 42420 horizon \
&& useradd -M --shell /usr/sbin/nologin --uid 42420 --gid 42420 horizon \
&& groupadd --force --gid 42421 influxdb \
&& useradd -M --shell /usr/sbin/nologin --uid 42421 --gid 42421 influxdb \
&& groupadd --force --gid 42422 ironic \
&& useradd -M --shell /usr/sbin/nologin --uid 42422 --gid 42422 ironic \
&& groupadd --force --gid 42461 ironic-inspector \
&& useradd -M --shell /usr/sbin/nologin --uid 42461 --gid 42461 ironic-inspector \
&& groupadd --force --gid 42423 kafka \
&& useradd -M --shell /usr/sbin/nologin --uid 42423 --gid 42423 kafka \
&& groupadd --force --gid 42458 karbor \
&& useradd -M --shell /usr/sbin/nologin --uid 42458 --gid 42458 karbor \
&& groupadd --force --gid 42425 keystone \
&& useradd -M --shell /usr/sbin/nologin --uid 42425 --gid 42425 keystone \
&& groupadd --force --gid 42426 kibana \
&& useradd -M --shell /usr/sbin/nologin --uid 42426 --gid 42426 kibana \
&& groupadd --force --gid 42400 kolla \
&& useradd -M --shell /usr/sbin/nologin --uid 42400 --gid 42400 kolla \
&& groupadd --force --gid 42469 kuryr \
&& useradd -M --shell /usr/sbin/nologin --uid 42469 --gid 42469 kuryr \
&& groupadd --force --gid 42473 libvirt \
&& useradd -M --shell /usr/sbin/nologin --uid 42473 --gid 42473 libvirt \
&& groupadd --force --gid 42428 magnum \
&& useradd -M --shell /usr/sbin/nologin --uid 42428 --gid 42428 magnum \
&& groupadd --force --gid 42429 manila \
&& useradd -M --shell /usr/sbin/nologin --uid 42429 --gid 42429 manila \
&& groupadd --force --gid 42457 memcached \
&& useradd -M --shell /usr/sbin/nologin --uid 42457 --gid 42457 memcached \
&& groupadd --force --gid 42430 mistral \
&& useradd -M --shell /usr/sbin/nologin --uid 42430 --gid 42430 mistral \
&& groupadd --force --gid 42431 monasca \
&& useradd -M --shell /usr/sbin/nologin --uid 42431 --gid 42431 monasca \
&& groupadd --force --gid 65534 mongodb \
&& useradd -M --shell /usr/sbin/nologin --uid 42432 --gid 65534 mongodb \
&& groupadd --force --gid 42433 murano \
&& useradd -M --shell /usr/sbin/nologin --uid 42433 --gid 42433 murano \
&& groupadd --force --gid 42434 mysql \
&& useradd -M --shell /usr/sbin/nologin --uid 42434 --gid 42434 mysql \
&& groupadd --force --gid 42435 neutron \
&& useradd -M --shell /usr/sbin/nologin --uid 42435 --gid 42435 neutron \
&& groupadd --force --gid 42436 nova \
&& useradd -M --shell /usr/sbin/nologin --uid 42436 --gid 42436 nova \
&& groupadd --force --gid 42470 novajoin \
&& useradd -M --shell /usr/sbin/nologin --uid 42470 --gid 42470 novajoin \
&& groupadd --force --gid 42437 octavia \
&& useradd -M --shell /usr/sbin/nologin --uid 42437 --gid 42437 octavia \
&& groupadd --force --gid 42462 odl \
&& useradd -M --shell /usr/sbin/nologin --uid 42462 --gid 42462 odl \
&& groupadd --force --gid 42438 panko \
&& useradd -M --shell /usr/sbin/nologin --uid 42438 --gid 42438 panko \
&& groupadd --force --gid 42472 prometheus \
&& useradd -M --shell /usr/sbin/nologin --uid 42472 --gid 42472 prometheus \
&& groupadd --force --gid 42465 qdrouterd \
&& useradd -M --shell /usr/sbin/nologin --uid 42465 --gid 42465 qdrouterd \
&& groupadd --force --gid 42427 qemu \
&& useradd -M --shell /usr/sbin/nologin --uid 42427 --gid 42427 qemu \
&& groupadd --force --gid 42439 rabbitmq \
&& useradd -M --shell /usr/sbin/nologin --uid 42439 --gid 42439 rabbitmq \
&& groupadd --force --gid 42440 rally \
&& useradd -M --shell /usr/sbin/nologin --uid 42440 --gid 42440 rally \
&& groupadd --force --gid 42460 redis \
&& useradd -M --shell /usr/sbin/nologin --uid 42460 --gid 42460 redis \
&& groupadd --force --gid 42441 sahara \
&& useradd -M --shell /usr/sbin/nologin --uid 42441 --gid 42441 sahara \
&& groupadd --force --gid 42442 searchlight \
&& useradd -M --shell /usr/sbin/nologin --uid 42442 --gid 42442 searchlight \
&& groupadd --force --gid 42443 senlin \
&& useradd -M --shell /usr/sbin/nologin --uid 42443 --gid 42443 senlin \
&& groupadd --force --gid 42467 sensu \
&& useradd -M --shell /usr/sbin/nologin --uid 42467 --gid 42467 sensu \
&& groupadd --force --gid 42468 skydive \
&& useradd -M --shell /usr/sbin/nologin --uid 42468 --gid 42468 skydive \
&& groupadd --force --gid 42444 solum \
&& useradd -M --shell /usr/sbin/nologin --uid 42444 --gid 42444 solum \
&& groupadd --force --gid 42445 swift \
&& useradd -M --shell /usr/sbin/nologin --uid 42445 --gid 42445 swift \
&& groupadd --force --gid 42446 tacker \
&& useradd -M --shell /usr/sbin/nologin --uid 42446 --gid 42446 tacker \
&& groupadd --force --gid 42447 td-agent \
&& useradd -M --shell /usr/sbin/nologin --uid 42447 --gid 42447 td-agent \
&& groupadd --force --gid 42448 telegraf \
&& useradd -M --shell /usr/sbin/nologin --uid 42448 --gid 42448 telegraf \
&& groupadd --force --gid 42449 trove \
&& useradd -M --shell /usr/sbin/nologin --uid 42449 --gid 42449 trove \
&& groupadd --force --gid 42459 vitrage \
&& useradd -M --shell /usr/sbin/nologin --uid 42459 --gid 42459 vitrage \
&& groupadd --force --gid 42450 vmtp \
&& useradd -M --shell /usr/sbin/nologin --uid 42450 --gid 42450 vmtp \
&& groupadd --force --gid 42451 watcher \
&& useradd -M --shell /usr/sbin/nologin --uid 42451 --gid 42451 watcher \
&& groupadd --force --gid 42452 zaqar \
&& useradd -M --shell /usr/sbin/nologin --uid 42452 --gid 42452 zaqar \
&& groupadd --force --gid 42453 zookeeper \
&& useradd -M --shell /usr/sbin/nologin --uid 42453 --gid 42453 zookeeper \
&& groupadd --force --gid 42463 zun \
&& useradd -M --shell /usr/sbin/nologin --uid 42463 --gid 42463 zun
LABEL kolla_version="5.0.2"
ENV KOLLA_BASE_DISTRO=centos \
KOLLA_INSTALL_TYPE=source \
KOLLA_INSTALL_METATYPE=mixed
#### Customize PS1 to be used with bash shell
COPY kolla_bashrc /tmp/
RUN cat /tmp/kolla_bashrc >> /etc/skel/.bashrc \
&& cat /tmp/kolla_bashrc >> /root/.bashrc
# PS1 var when used /bin/sh shell
ENV PS1="$(tput bold)($(printenv KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$ "
# For RPM Variants, enable the correct repositories - this should all be done
# in the base image so repos are consistent throughout the system. This also
# enables to provide repo overrides at a later date in a simple fashion if we
# desire such functionality. I think we will :)
RUN CURRENT_DISTRO_RELEASE=$(awk '{match($0, /[0-9]+/,version)}END{print version[0]}' /etc/system-release); \
if [ $CURRENT_DISTRO_RELEASE != "7" ]; then \
echo "Only release '7' is supported on centos"; false; \
fi \
&& cat /tmp/kolla_bashrc >> /etc/bashrc \
&& sed -i 's|^\(override_install_langs=.*\)|# \1|' /etc/yum.conf
COPY yum.conf /etc/yum.conf
#### BEGIN REPO ENABLEMENT
COPY elasticsearch.repo /etc/yum.repos.d/elasticsearch.repo
COPY grafana.repo /etc/yum.repos.d/grafana.repo
COPY influxdb.repo /etc/yum.repos.d/influxdb.repo
COPY kibana.yum.repo /etc/yum.repos.d/kibana.yum.repo
COPY MariaDB.repo /etc/yum.repos.d/MariaDB.repo
COPY opendaylight.repo /etc/yum.repos.d/opendaylight.repo
COPY td.repo /etc/yum.repos.d/td.repo
COPY zookeeper.repo /etc/yum.repos.d/zookeeper.repo
RUN yum -y install http://repo.percona.com/release/7/RPMS/x86_64/percona-release-0.1-4.noarch.rpm && yum clean all
RUN rpm --import http://yum.mariadb.org/RPM-GPG-KEY-MariaDB \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona \
&& rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch \
&& rpm --import https://repos.influxdata.com/influxdb.key \
&& rpm --import https://packagecloud.io/gpg.key \
&& rpm --import https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana \
&& rpm --import https://packages.treasuredata.com/GPG-KEY-td-agent
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
RUN yum -y install epel-release yum-plugin-priorities centos-release-ceph-jewel centos-release-openstack-pike centos-release-opstools centos-release-qemu-ev && yum clean all
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-OpsTools \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization \
&& yum clean all
#### END REPO ENABLEMENT
# Update packages
RUN yum -y install curl iproute iscsi-initiator-utils lvm2 scsi-target-utils sudo tar which && yum clean all
COPY set_configs.py /usr/local/bin/kolla_set_configs
COPY start.sh /usr/local/bin/kolla_start
COPY sudoers /etc/sudoers
COPY curlrc /root/.curlrc
RUN curl -sSL https://github.com/Yelp/dumb-init/releases/download/v1.1.3/dumb-init_1.1.3_amd64 -o /usr/local/bin/dumb-init \
&& chmod +x /usr/local/bin/dumb-init \
&& sed -i 's|#!|#!/usr/local/bin/dumb-init |' /usr/local/bin/kolla_start
RUN touch /usr/local/bin/kolla_extend_start \
&& chmod 755 /usr/local/bin/kolla_start /usr/local/bin/kolla_extend_start /usr/local/bin/kolla_set_configs \
&& chmod 440 /etc/sudoers \
&& mkdir -p /var/log/kolla \
&& chown :kolla /var/log/kolla \
&& chmod 2775 /var/log/kolla \
&& rm -f /tmp/kolla_bashrc
CMD ["kolla_start"]
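One subtle step in the Dockerfile above is the sed that rewrites kolla_start's shebang, which makes dumb-init PID 1 inside the container with the original interpreter passed to it as an argument. A self-contained illustration of that rewrite on a throwaway script:

```shell
# Reproduce the Dockerfile's shebang rewrite:
# 's|#!|#!/usr/local/bin/dumb-init |' turns "#!/bin/bash" into
# "#!/usr/local/bin/dumb-init /bin/bash", so dumb-init runs first
# and supervises /bin/bash as its child.
printf '#!/bin/bash\necho hello\n' > /tmp/kolla_start_demo
sed -i 's|#!|#!/usr/local/bin/dumb-init |' /tmp/kolla_start_demo
head -n1 /tmp/kolla_start_demo
# → #!/usr/local/bin/dumb-init /bin/bash
```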
Creating a Local Yum Repository
Build the sync image
- Start a CentOS container for syncing the remote repositories:
$ mkdir -pv /opt/yum/repo
$ docker run -it --name yum-sync -v /opt/:/opt/ centos:7 /bin/bash
- Configure HTTP and HTTPS proxies for Yum (you know why):
$ vi ~/set_proxy.sh
#!/bin/bash
export http_proxy=192.168.195.1:1080; export https_proxy=$http_proxy
$ chmod a+x ~/set_proxy.sh
$ . ~/set_proxy.sh
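Note the leading `.` when running the script: sourcing it executes it in the current shell, so the exported proxy variables persist afterwards, whereas running it as `./set_proxy.sh` would only set them in a child process. A self-contained check, using a /tmp copy and the same 192.168.195.1:1080 endpoint from the environment description:

```shell
# Write the proxy script, then source it with "." so its exports
# land in the current shell rather than a throwaway subshell.
cat > /tmp/set_proxy.sh <<'EOF'
#!/bin/bash
export http_proxy=192.168.195.1:1080; export https_proxy=$http_proxy
EOF
chmod a+x /tmp/set_proxy.sh
. /tmp/set_proxy.sh
echo "http_proxy=$http_proxy https_proxy=$https_proxy"
# → http_proxy=192.168.195.1:1080 https_proxy=192.168.195.1:1080
```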
- Enter the base image's Dockerfile directory:
$ cd /opt/kolla/docker/base
- Install the default yum.conf and enable the package cache:
$ cp -v yum.conf /etc/yum.conf
$ vi /etc/yum.conf
[main]
keepcache=1
cachedir=/var/yum/$basearch/$releasever
logfile=/var/log/yum.log
exactarch=1
obsoletes=1
gpgcheck=1
plugins=1
installonly_limit=0
skip_missing_names_on_install=False
- Install the Yum repo files and import the GPG keys:
$ cp -v elasticsearch.repo grafana.repo influxdb.repo kibana.yum.repo MariaDB.repo opendaylight.repo td.repo zookeeper.repo /etc/yum.repos.d/
$ yum -y install http://repo.percona.com/release/7/RPMS/x86_64/percona-release-0.1-4.noarch.rpm
$ yum -y install epel-release yum-plugin-priorities centos-release-ceph-jewel centos-release-openstack-pike centos-release-opstools centos-release-qemu-ev
$ rpm --import http://yum.mariadb.org/RPM-GPG-KEY-MariaDB \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona \
&& rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch \
&& rpm --import https://repos.influxdata.com/influxdb.key \
&& rpm --import https://packagecloud.io/gpg.key \
&& rpm --import https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana \
&& rpm --import https://packages.treasuredata.com/GPG-KEY-td-agent \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-OpsTools \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization
- Fetch the repo file for a domestic CentOS mirror:
$ yum install -y wget curl git
$ cd /etc/yum.repos.d/
$ wget -O CentOS-Base.repo https://lug.ustc.edu.cn/wiki/_export/code/mirrors/help/centos?codeblock=3
- Build the Yum metadata cache:
$ yum makecache
- Install the repository-creation tool:
$ yum install -y createrepo
- Commit the container as the repo-sync Docker image, then delete the container:
$ docker commit -m "openstack pike base yum sync." -a "LastRitter<lastritter@gmail.com>" yum-sync yum-sync:pike
$ docker rm yum-sync
- Export the repo-sync Docker image (optional):
$ docker save -o openstack_pike_yum_sync_`date +%Y-%m-%d`.tar.gz yum-sync:pike
- Import the repo-sync Docker image (optional):
$ docker load --input openstack_pike_yum_sync_`date +%Y-%m-%d`.tar.gz
Sync the Remote Repositories
- Start the sync image and set the proxy:
$ docker run --rm -it -v /opt:/opt/ yum-sync:pike /bin/bash
$ . ~/set_proxy.sh
- Sync all remote repositories:
$ reposync -p /opt/yum/repo/
- Or sync repositories individually (multiple containers can sync different repositories in parallel):
$ reposync -p /opt/yum/repo/ --repoid=base
$ reposync -p /opt/yum/repo/ --repoid=updates
$ reposync -p /opt/yum/repo/ --repoid=extras
$ reposync -p /opt/yum/repo/ --repoid=epel
$ reposync -p /opt/yum/repo/ --repoid=centos-ceph-jewel
$ reposync -p /opt/yum/repo/ --repoid=centos-openstack-pike
$ reposync -p /opt/yum/repo/ --repoid=centos-opstools-release
$ reposync -p /opt/yum/repo/ --repoid=centos-qemu-ev
$ reposync -p /opt/yum/repo/ --repoid=elasticsearch-2.x
$ reposync -p /opt/yum/repo/ --repoid=grafana
$ reposync -p /opt/yum/repo/ --repoid=influxdb
$ reposync -p /opt/yum/repo/ --repoid=kibana-4.6
$ reposync -p /opt/yum/repo/ --repoid=mariadb
$ reposync -p /opt/yum/repo/ --repoid=opendaylight
$ reposync -p /opt/yum/repo/ --repoid=percona-release-x86_64
$ reposync -p /opt/yum/repo/ --repoid=percona-release-noarch
$ reposync -p /opt/yum/repo/ --repoid=treasuredata
$ reposync -p /opt/yum/repo/ --repoid=iwienand-zookeeper-el7
- Build the package index files for the local Yum repositories:
$ ls /opt/yum/repo/ | xargs -I {} createrepo -p /opt/yum/repo/{}
- Back up the synced packages (optional):
$ cd /opt/yum/repo/
$ ls | xargs -I {} tar cJvf /path/to/backup/yum_repo_{}_`date +%Y-%m-%d`.tar.xz {}
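The backup loop above produces one dated .tar.xz archive per repository directory. A minimal dry run of the same `ls | xargs -I {}` pattern on throwaway directories (all paths here are illustrative):

```shell
# Simulate the per-repository backup: one dated archive per
# subdirectory, named like the real command above.
mkdir -p /tmp/repo-demo/base /tmp/repo-demo/updates /tmp/repo-demo/backup
cd /tmp/repo-demo
ls | grep -v backup | xargs -I {} tar cJf backup/yum_repo_{}_`date +%Y-%m-%d`.tar.xz {}
ls backup/   # one yum_repo_*_<date>.tar.xz per repository directory
```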
Back Up the Repository Configuration
- Start the sync image and enter Kolla's base image source directory:
$ docker run --rm -it -v /opt:/opt/ yum-sync:pike /bin/bash
$ mkdir -pv /opt/kolla/kolla/docker/base/cache
$ cd /opt/kolla/kolla/docker/base/cache
- Save the repo files:
$ mkdir repo
$ cp -v /etc/yum.repos.d/* repo/
- Save the RPM files:
$ mkdir rpms
$ cd rpms
$ wget http://repo.percona.com/release/7/RPMS/x86_64/percona-release-0.1-4.noarch.rpm
$ cp -v /var/yum/x86_64/7/{extras/packages/{epel-release-*,centos-release-*}.rpm,base/packages/yum-plugin-priorities-*.rpm} .
$ cd ..
- Save the key files:
$ mkdir keys
$ cd keys
$ wget http://yum.mariadb.org/RPM-GPG-KEY-MariaDB \
&& wget https://packages.elastic.co/GPG-KEY-elasticsearch \
&& wget https://repos.influxdata.com/influxdb.key \
&& wget https://packagecloud.io/gpg.key \
&& wget https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana \
&& wget https://packages.treasuredata.com/GPG-KEY-td-agent
$ cd ..
Start the Local Repository
- Start an Nginx container, exposing port 10022, to serve the Yum repositories:
$ docker run --name=yum-server --restart=always -d -p 10022:80 -v /opt/yum/repo:/usr/share/nginx/html nginx
- Verify that the repository is served over HTTP:
$ curl -X GET http://192.168.195.131:10022/base/repodata/repomd.xml
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo" xmlns:rpm="http://linux.duke.edu/metadata/rpm">
<revision>1522137310</revision>
<data type="filelists">
<checksum type="sha256">c1561546c684bd06b3a499c2babc35c761b37b2fc331677eca12f0c769b1bb37</checksum>
<open-checksum type="sha256">99513068b73d614e3d76f22b892fe62bee6af26ed5640d70cb3744e8c57045b5</open-checksum>
<location href="repodata/c1561546c684bd06b3a499c2babc35c761b37b2fc331677eca12f0c769b1bb37-filelists.xml.gz"/>
<timestamp>1522137364</timestamp>
<size>6936336</size>
<open-size>97041754</open-size>
</data>
<data type="primary">
<checksum type="sha256">1ce4baf2de7b0c88dced853cf47e70788cc69dc1db6b8f4be5d0d04b8690a488</checksum>
<open-checksum type="sha256">7ab3d5121dd6c296665850a210f1258c857ddc20cdfa8990cab9ccf34acc12f8</open-checksum>
<location href="repodata/1ce4baf2de7b0c88dced853cf47e70788cc69dc1db6b8f4be5d0d04b8690a488-primary.xml.gz"/>
<timestamp>1522137364</timestamp>
<size>2831814</size>
<open-size>26353155</open-size>
</data>
<data type="primary_db">
<checksum type="sha256">befe5add1fa3a44783fccf25fef6a787a81bcbdca4f19417cfe16e66c5e7f26b</checksum>
<open-checksum type="sha256">938764645340e4863b503902c10ca326610c430c5e606c5a99461e890713e131</open-checksum>
<location href="repodata/befe5add1fa3a44783fccf25fef6a787a81bcbdca4f19417cfe16e66c5e7f26b-primary.sqlite.bz2"/>
<timestamp>1522137381</timestamp>
<database_version>10</database_version>
<size>6025221</size>
<open-size>29564928</open-size>
</data>
<data type="other_db">
<checksum type="sha256">cf0cc856d46b3095106da78256fb28f9d8defea4118d0e75eab07dc53b7d3f0d</checksum>
<open-checksum type="sha256">dbb8218b01cc5d8159c7996cf2aa574aa881d837713f8fae06849b13d14d78a1</open-checksum>
<location href="repodata/cf0cc856d46b3095106da78256fb28f9d8defea4118d0e75eab07dc53b7d3f0d-other.sqlite.bz2"/>
<timestamp>1522137367</timestamp>
<database_version>10</database_version>
<size>2579184</size>
<open-size>18237440</open-size>
</data>
<data type="other">
<checksum type="sha256">a0af68e1057f6b03a36894d3a4f267bbe0590327423d0005d95566fb58cd7a29</checksum>
<open-checksum type="sha256">967f79ee76ebc7bfe82d74e5aa20403751454f93a5d51ed26f3118e6fda29425</open-checksum>
<location href="repodata/a0af68e1057f6b03a36894d3a4f267bbe0590327423d0005d95566fb58cd7a29-other.xml.gz"/>
<timestamp>1522137364</timestamp>
<size>1564207</size>
<open-size>19593459</open-size>
</data>
<data type="filelists_db">
<checksum type="sha256">6cd606547d4f569538d4090e9accdc3c69964de1116b9ab1e0a7864bb1f3ec98</checksum>
<open-checksum type="sha256">8135f93597ef335a32817b598b45d9f48a1f10271d0ae4263c2860092aab8cba</open-checksum>
<location href="repodata/6cd606547d4f569538d4090e9accdc3c69964de1116b9ab1e0a7864bb1f3ec98-filelists.sqlite.bz2"/>
<timestamp>1522137376</timestamp>
<database_version>10</database_version>
<size>7019993</size>
<open-size>45116416</open-size>
</data>
</repomd>
Use the Local Repository
- Copy the repo files saved earlier:
$ cd /opt/kolla/kolla/docker/base/cache
$ cp -rv repo local
- Change each repository's base URL to the placeholder yum_local_repo_url_base, to be replaced with the real address at use time:
--- a/docker/base/cache/local/CentOS-Base.repo
+++ b/docker/base/cache/local/CentOS-Base.repo
@@ -13,7 +13,8 @@
[base]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
-baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/os/$basearch/
+#baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/os/$basearch/
+baseurl=http://yum_local_repo_url_base/base/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
@@ -21,7 +22,8 @@ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
[updates]
name=CentOS-$releasever - Updates
# mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
-baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/updates/$basearch/
+#baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/updates/$basearch/
+baseurl=http://yum_local_repo_url_base/updates/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
@@ -29,7 +31,8 @@ gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
[extras]
name=CentOS-$releasever - Extras
# mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
-baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/extras/$basearch/
+#baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/extras/$basearch/
+baseurl=http://yum_local_repo_url_base/extras/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
@@ -40,4 +43,4 @@ name=CentOS-$releasever - Plus
baseurl=http://mirrors.ustc.edu.cn/centos/$releasever/centosplus/$basearch/
gpgcheck=1
enabled=0
-gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
\ No newline at end of file
+gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
--- a/docker/base/cache/local/CentOS-Ceph-Jewel.repo
+++ b/docker/base/cache/local/CentOS-Ceph-Jewel.repo
@@ -5,7 +5,8 @@
[centos-ceph-jewel]
name=CentOS-$releasever - Ceph Jewel
-baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/ceph-jewel/
+#baseurl=http://mirror.centos.org/centos/$releasever/storage/$basearch/ceph-jewel/
+baseurl=http://yum_local_repo_url_base/centos-ceph-jewel/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
--- a/docker/base/cache/local/CentOS-OpenStack-pike.repo
+++ b/docker/base/cache/local/CentOS-OpenStack-pike.repo
@@ -5,7 +5,8 @@
[centos-openstack-pike]
name=CentOS-7 - OpenStack pike
-baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
+#baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
+baseurl=http://yum_local_repo_url_base/centos-openstack-pike/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud
--- a/docker/base/cache/local/CentOS-OpsTools.repo
+++ b/docker/base/cache/local/CentOS-OpsTools.repo
@@ -11,7 +11,8 @@ enabled=0
[centos-opstools-release]
name=CentOS-7 - OpsTools - release
-baseurl=http://mirror.centos.org/centos/$releasever/opstools/$basearch/
+#baseurl=http://mirror.centos.org/centos/$releasever/opstools/$basearch/
+baseurl=http://yum_local_repo_url_base/centos-opstools-release/
gpgcheck=1
enabled=1
skip_if_unavailable=1
--- a/docker/base/cache/local/CentOS-QEMU-EV.repo
+++ b/docker/base/cache/local/CentOS-QEMU-EV.repo
@@ -5,7 +5,8 @@
[centos-qemu-ev]
name=CentOS-$releasever - QEMU EV
-baseurl=http://mirror.centos.org/centos/$releasever/virt/$basearch/kvm-common/
+#baseurl=http://mirror.centos.org/centos/$releasever/virt/$basearch/kvm-common/
+baseurl=http://yum_local_repo_url_base/centos-qemu-ev/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization
--- a/docker/base/cache/local/MariaDB.repo
+++ b/docker/base/cache/local/MariaDB.repo
@@ -1,5 +1,6 @@
[mariadb]
name = MariaDB
-baseurl = https://yum.mariadb.org/10.0/centos7-amd64
+#baseurl = https://yum.mariadb.org/10.0/centos7-amd64
+baseurl=http://yum_local_repo_url_base/mariadb/
gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck = 1
--- a/docker/base/cache/local/elasticsearch.repo
+++ b/docker/base/cache/local/elasticsearch.repo
@@ -1,6 +1,7 @@
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
-baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
+#baseurl=https://packages.elastic.co/elasticsearch/2.x/centos
+baseurl=http://yum_local_repo_url_base/elasticsearch-2.x/
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
--- a/docker/base/cache/local/epel.repo
+++ b/docker/base/cache/local/epel.repo
@@ -1,7 +1,8 @@
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
-metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
+#metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
+baseurl=http://yum_local_repo_url_base/epel/
failovermethod=priority
enabled=1
gpgcheck=1
--- a/docker/base/cache/local/grafana.repo
+++ b/docker/base/cache/local/grafana.repo
@@ -1,7 +1,8 @@
[grafana]
name=grafana
-baseurl=https://packagecloud.io/grafana/stable/el/7/$basearch
-repo_gpgcheck=1
+#baseurl=https://packagecloud.io/grafana/stable/el/7/$basearch
+baseurl=http://yum_local_repo_url_base/grafana/
+#repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packagecloud.io/gpg.key https://grafanarel.s3.amazonaws.com/RPM-GPG-KEY-grafana
--- a/docker/base/cache/local/influxdb.repo
+++ b/docker/base/cache/local/influxdb.repo
@@ -1,6 +1,7 @@
[influxdb]
name = InfluxDB Repository - RHEL $releasever
-baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
+#baseurl = https://repos.influxdata.com/rhel/$releasever/$basearch/stable
+baseurl=http://yum_local_repo_url_base/influxdb/
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdb.key
--- a/docker/base/cache/local/kibana.yum.repo
+++ b/docker/base/cache/local/kibana.yum.repo
@@ -1,6 +1,7 @@
[kibana-4.6]
name=Kibana repository for 4.6.x packages
-baseurl=https://packages.elastic.co/kibana/4.6/centos
+#baseurl=https://packages.elastic.co/kibana/4.6/centos
+baseurl=http://yum_local_repo_url_base/kibana-4.6/
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
--- a/docker/base/cache/local/opendaylight.repo
+++ b/docker/base/cache/local/opendaylight.repo
@@ -1,5 +1,6 @@
[opendaylight]
name=CentOS CBS OpenDaylight Release Repository
-baseurl=http://cbs.centos.org/repos/nfv7-opendaylight-6-release/x86_64/os/
+#baseurl=http://cbs.centos.org/repos/nfv7-opendaylight-6-release/x86_64/os/
+baseurl=http://yum_local_repo_url_base/opendaylight/
enabled=1
gpgcheck=0
--- a/docker/base/cache/local/percona-release.repo
+++ b/docker/base/cache/local/percona-release.repo
@@ -3,14 +3,16 @@
########################################
[percona-release-$basearch]
name = Percona-Release YUM repository - $basearch
-baseurl = http://repo.percona.com/release/$releasever/RPMS/$basearch
+#baseurl = http://repo.percona.com/release/$releasever/RPMS/$basearch
+baseurl=http://yum_local_repo_url_base/percona-release-$basearch/
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Percona
[percona-release-noarch]
name = Percona-Release YUM repository - noarch
-baseurl = http://repo.percona.com/release/$releasever/RPMS/noarch
+#baseurl = http://repo.percona.com/release/$releasever/RPMS/noarch
+baseurl=http://yum_local_repo_url_base/percona-release-noarch/
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Percona
--- a/docker/base/cache/local/td.repo
+++ b/docker/base/cache/local/td.repo
@@ -1,5 +1,6 @@
[treasuredata]
name=TreasureData
-baseurl=http://packages.treasuredata.com/2/redhat/\$releasever/\$basearch
+#baseurl=http://packages.treasuredata.com/2/redhat/\$releasever/\$basearch
+baseurl=http://yum_local_repo_url_base/treasuredata/
gpgcheck=1
gpgkey=https://packages.treasuredata.com/GPG-KEY-td-agent
--- a/docker/base/cache/local/zookeeper.repo
+++ b/docker/base/cache/local/zookeeper.repo
@@ -1,6 +1,7 @@
[iwienand-zookeeper-el7]
name=Copr repo for zookeeper-el7 owned by iwienand
-baseurl=https://copr-be.cloud.fedoraproject.org/results/iwienand/zookeeper-el7/epel-7-$basearch/
+#baseurl=https://copr-be.cloud.fedoraproject.org/results/iwienand/zookeeper-el7/epel-7-$basearch/
+baseurl=http://yum_local_repo_url_base/iwienand-zookeeper-el7/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
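Every baseurl in the diffs above points at the yum_local_repo_url_base placeholder, which a sed pass substitutes with the real host:port before the files are used. A self-contained round trip of that substitution on a throwaway repo file:

```shell
# Replace the placeholder with the real address, exactly as the
# test step does, on a scratch copy of one repo file.
mkdir -p /tmp/local-repo-demo
cat > /tmp/local-repo-demo/demo.repo <<'EOF'
[base]
name=CentOS-$releasever - Base
baseurl=http://yum_local_repo_url_base/base/
gpgcheck=1
EOF
ls /tmp/local-repo-demo/*.repo | xargs sed -i 's/yum_local_repo_url_base/192.168.195.131:10022/g'
grep baseurl /tmp/local-repo-demo/demo.repo
# → baseurl=http://192.168.195.131:10022/base/
```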
- Build a Yum client image for testing the repositories:
$ docker run -it --name yum-client -v /opt/:/opt/ centos:7 /bin/bash
$ mkdir -pv /tmp/rpms && cd /tmp/rpms
$ cp -vf /opt/kolla/kolla/docker/base/cache/rpms/*.rpm .
$ rpm -ivh *.rpm
$ cd - && rm -rfv /tmp/rpms
$ mkdir -pv /tmp/keys && cd /tmp/keys
$ cp -vf /opt/kolla/kolla/docker/base/cache/keys/{RPM-GPG-KEY-MariaDB,GPG-KEY-elasticsearch,influxdb.key,gpg.key,RPM-GPG-KEY-grafana,GPG-KEY-td-agent} .
$ rpm --import /tmp/keys/RPM-GPG-KEY-MariaDB \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona \
&& rpm --import /tmp/keys/GPG-KEY-elasticsearch \
&& rpm --import /tmp/keys/influxdb.key \
&& rpm --import /tmp/keys/gpg.key \
&& rpm --import /tmp/keys/RPM-GPG-KEY-grafana \
&& rpm --import /tmp/keys/GPG-KEY-td-agent \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-OpsTools \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization
$ cd - && rm -rfv /tmp/keys
$ cd /etc/yum.repos.d/
$ rm -vf *.repo
$ cp -vf /opt/kolla/kolla/docker/base/cache/local/*.repo .
$ yum clean all && rm -rf /var/cache/yum
- Commit the Yum repository test Docker image, then remove the container:
$ docker commit -m "openstack pike yum client." -a "LastRitter<lastritter@gmail.com>" yum-client yum-client:pike
$ docker rm yum-client
- Export the yum-client Docker image (optional):
$ docker save -o openstack_pike_yum_client_`date +%Y-%m-%d`.tar.gz yum-client:pike
- Import the yum-client Docker image (optional):
$ docker load --input openstack_pike_yum_client_`date +%Y-%m-%d`.tar.gz
- Replace yum_local_repo_url_base with the actual address 192.168.195.131:10022, then test the Yum repository:
$ docker run --rm -it yum-client:pike /bin/bash
$ ls /etc/yum.repos.d/*.repo | xargs sed -i 's/yum_local_repo_url_base/192.168.195.131:10022/g'
$ yum repolist
Loaded plugins: fastestmirror, ovl, priorities
base | 2.9 kB 00:00:00
centos-ceph-jewel | 2.9 kB 00:00:00
centos-openstack-pike | 2.9 kB 00:00:00
centos-opstools-release | 2.9 kB 00:00:00
centos-qemu-ev | 2.9 kB 00:00:00
elasticsearch-2.x | 2.9 kB 00:00:00
epel | 2.9 kB 00:00:00
extras | 2.9 kB 00:00:00
grafana | 2.9 kB 00:00:00
influxdb | 2.9 kB 00:00:00
iwienand-zookeeper-el7 | 2.9 kB 00:00:00
kibana-4.6 | 2.9 kB 00:00:00
mariadb | 2.9 kB 00:00:00
opendaylight | 2.9 kB 00:00:00
percona-release-noarch | 2.9 kB 00:00:00
percona-release-x86_64 | 2.9 kB 00:00:00
treasuredata | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/18): centos-ceph-jewel/primary_db | 62 kB 00:00:00
(2/18): elasticsearch-2.x/primary_db | 9.4 kB 00:00:00
(3/18): centos-opstools-release/primary_db | 155 kB 00:00:00
(4/18): centos-qemu-ev/primary_db | 34 kB 00:00:00
(5/18): grafana/primary_db | 12 kB 00:00:00
(6/18): centos-openstack-pike/primary_db | 933 kB 00:00:00
(7/18): influxdb/primary_db | 29 kB 00:00:00
(8/18): kibana-4.6/primary_db | 42 kB 00:00:00
(9/18): iwienand-zookeeper-el7/primary_db | 2.4 kB 00:00:00
(10/18): extras/primary_db | 184 kB 00:00:00
(11/18): epel/primary_db | 6.2 MB 00:00:00
(12/18): base/primary_db | 5.7 MB 00:00:00
(13/18): mariadb/primary_db | 21 kB 00:00:00
(14/18): percona-release-noarch/primary_db | 15 kB 00:00:00
(15/18): opendaylight/primary_db | 2.4 kB 00:00:00
(16/18): percona-release-x86_64/x86_64/primary_db | 40 kB 00:00:00
(17/18): treasuredata/primary_db | 47 kB 00:00:00
(18/18): updates/primary_db | 6.9 MB 00:00:00
Determining fastest mirrors
repo id repo name status
base CentOS-7 - Base 9591
centos-ceph-jewel CentOS-7 - Ceph Jewel 92
centos-openstack-pike CentOS-7 - OpenStack pike 2389
centos-opstools-release CentOS-7 - OpsTools - release 427
centos-qemu-ev CentOS-7 - QEMU EV 47
elasticsearch-2.x Elasticsearch repository for 2.x packages 22
epel Extra Packages for Enterprise Linux 7 - x86_64 12439
extras CentOS-7 - Extras 444
grafana grafana 33
influxdb InfluxDB Repository - RHEL 7 104
iwienand-zookeeper-el7 Copr repo for zookeeper-el7 owned by iwienand 1
kibana-4.6 Kibana repository for 4.6.x packages 14
mariadb MariaDB 15
opendaylight CentOS CBS OpenDaylight Release Repository 1
percona-release-noarch Percona-Release YUM repository - noarch 26
percona-release-x86_64/x86_64 Percona-Release YUM repository - x86_64 70
treasuredata TreasureData 15
updates CentOS-7 - Updates 2411
repolist: 28141
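The sed substitution used above can be rehearsed outside Docker before committing the image. A minimal sketch under a scratch directory; the repo file content is illustrative, and the address is the one used in this article:

```shell
# Write a scratch repo file containing the placeholder.
mkdir -p /tmp/repo-test
cat > /tmp/repo-test/demo.repo <<'EOF'
[demo]
name=Demo
baseurl=http://yum_local_repo_url_base/demo/
gpgcheck=0
EOF

# Same substitution as performed inside the yum-client container.
ls /tmp/repo-test/*.repo | xargs sed -i 's/yum_local_repo_url_base/192.168.195.131:10022/g'

grep baseurl /tmp/repo-test/demo.repo
# baseurl=http://192.168.195.131:10022/demo/
```

Because the placeholder lives in the committed image, the same image can be pointed at a different mirror later by re-running the substitution with another address.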
Create a Local Pip Repository
Initialize the Pip Container
- Create the pip-server container:
$ mkdir -pv /opt/pip
$ docker run -it --name pip-server -v /opt/:/opt/ centos:7 /bin/bash
- Install basic packages:
$ yum install -y epel-release
$ yum install -y python-pip httpd-tools
$ pip install --upgrade pip
$ pip install pypiserver pip2pi passlib
Start the Pip Service
- Set a password and start the pypi-server service:
$ htpasswd -sc ~/.htaccess admin
New password: 123456
Re-type new password: 123456
Adding password for user admin
$ pypi-server -p 3141 -P ~/.htaccess /opt/pip
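htpasswd -s stores the password as a base64-encoded SHA-1 digest in the {SHA} scheme, so the entry written above can be reproduced (and thus verified) with openssl alone. A sketch assuming the admin/123456 credentials from this article:

```shell
# Rebuild the `htpasswd -s` entry by hand: {SHA} is base64(sha1(password)).
user=admin
password=123456
hash=$(printf '%s' "$password" | openssl sha1 -binary | base64)
printf '%s:{SHA}%s\n' "$user" "$hash"
# admin:{SHA}fEqNCco3Yq9h5ZUglD3CZJT4lBs=
```

Comparing this line against ~/.htaccess is a quick way to check that the stored credential matches the password you think it does.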
- Open another shell in the pip-server container and check that the service is working:
$ docker exec -it pip-server bash
$ curl -X GET http://localhost:3141
<html><head><title>Welcome to pypiserver!</title></head><body>
<h1>Welcome to pypiserver!</h1>
<p>This is a PyPI compatible package index serving 473 packages.</p>
<p> To use this server with pip, run the the following command:
<blockquote><pre>
pip install --extra-index-url http://localhost:3141/ PACKAGE [PACKAGE2...]
</pre></blockquote></p>
<p> To use this server with easy_install, run the the following command:
<blockquote><pre>
easy_install -i http://localhost:3141/simple/ PACKAGE
</pre></blockquote></p>
<p>The complete list of all packages can be found <a href="/packages/">here</a>
or via the <a href="/simple/">simple</a> index.</p>
<p>This instance is running version 1.2.1 of the
<a href="https://pypi.python.org/pypi/pypiserver">pypiserver</a> software.</p>
</body></html>
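The front page reports how many packages the index is serving, which makes for a quick health check. A sketch that scrapes that number from the HTML; the sample line is embedded here so the snippet runs offline, whereas against a live server it would come from curl http://localhost:3141:

```shell
# Extract "serving N packages" from the pypiserver front page.
page='<p>This is a PyPI compatible package index serving 473 packages.</p>'
count=$(printf '%s\n' "$page" | sed -n 's/.*serving \([0-9]*\) packages.*/\1/p')
echo "$count"
# 473
```

Watching this count grow after a batch download confirms that dir2pi indexed the new files.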
Back Up the Pip Image
- Commit the pip-server image, then remove the container:
$ docker commit -m "openstack pike pip server." -a "LastRitter<lastritter@gmail.com>" pip-server pip-server:pike
$ docker rm pip-server
- Export the pip-server image (optional):
$ docker save -o openstack_pike_pip_server_`date +%Y-%m-%d`.tar.gz pip-server:pike
- Import the pip-server image (optional):
$ docker load --input openstack_pike_pip_server_`date +%Y-%m-%d`.tar.gz
Test the Pip Image
- Start a pip-server container:
$ docker run --name=pip-server --restart=always -d -p 3141:3141 -v /opt/:/opt/ pip-server:pike pypi-server -p 3141 -P ~/.htaccess /opt/pip
- From the host, check that pip-server is working:
$ curl -X GET http://192.168.195.131:3141
<html><head><title>Welcome to pypiserver!</title></head><body>
<h1>Welcome to pypiserver!</h1>
<p>This is a PyPI compatible package index serving 996 packages.</p>
<p> To use this server with pip, run the the following command:
<blockquote><pre>
pip install --extra-index-url http://192.168.195.131:3141/ PACKAGE [PACKAGE2...]
</pre></blockquote></p>
<p> To use this server with easy_install, run the the following command:
<blockquote><pre>
easy_install -i http://192.168.195.131:3141/simple/ PACKAGE
</pre></blockquote></p>
<p>The complete list of all packages can be found <a href="/packages/">here</a>
or via the <a href="/simple/">simple</a> index.</p>
<p>This instance is running version 1.2.1 of the
<a href="https://pypi.python.org/pypi/pypiserver">pypiserver</a> software.</p>
</body></html>
Download Packages
- Download a single package:
$ docker exec -it pip-server bash
$ cd /opt/pip/ && pip download "tox==2.9.1"
- Batch-download packages (which packages to download can be determined from the Dockerfile.j2 templates used to build the images):
$ docker exec -it pip-server bash
$ vi /opt/requirements.txt
pytest===3.1.3
tox===2.9.1
$ cd /opt/pip/ && pip download -r /opt/requirements.txt
- Build the index:
$ docker exec -it pip-server dir2pi --normalize-package-names /opt/pip/
# Or
$ docker run --rm -it -v /opt/:/opt/ pip-server:pike dir2pi --normalize-package-names /opt/pip/
Use the Pip Repository
- Configure the host to trust pip-server:
$ mkdir -pv ~/.pip/
$ vi ~/.pip/pip.conf
[global]
trusted-host = 192.168.195.131
index-url = http://192.168.195.131:3141/simple
- Install a previously cached package:
# pip install -i http://192.168.195.131:3141/simple/ tox
$ pip install tox
Collecting tox
Downloading http://192.168.195.131:3141/packages/simple/tox/tox-2.9.1-py2.py3-none-any.whl (73kB)
100% |████████████████████████████████| 81kB 49.7MB/s
Requirement already satisfied: virtualenv>=1.11.2; python_version != "3.2" in /usr/lib/python2.7/site-packages (from tox)
Requirement already satisfied: pluggy<1.0,>=0.3.0 in /usr/lib/python2.7/site-packages (from tox)
Requirement already satisfied: six in /usr/lib/python2.7/site-packages (from tox)
Requirement already satisfied: py>=1.4.17 in /usr/lib/python2.7/site-packages (from tox)
Installing collected packages: tox
Successfully installed tox-2.9.1
$ docker logs pip-server
172.17.0.1 - - [31/Mar/2018 15:47:35] "GET / HTTP/1.1" 200 796
192.168.195.131 - - [31/Mar/2018 15:51:27] "GET / HTTP/1.1" 200 808
192.168.195.131 - - [31/Mar/2018 15:57:25] "GET /simple/tox/ HTTP/1.1" 200 464
192.168.195.131 - - [31/Mar/2018 15:57:25] "GET /packages/simple/tox/tox-2.9.1-py2.py3-none-any.whl HTTP/1.1" 200 73454
Create a Local Git Repository
Configure the MySQL Service
- Start the mysql-server container with the root password set to 123456:
$ docker run -d --name mysql-server -p 13306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql
- Enter the mysql-server container and install basic packages:
$ docker exec -it mysql-server bash
$ apt-get update
$ apt-get install vim
- Configure the mysql-server container to add UTF-8 support:
$ vi /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
character-set-server = utf8
init_connect = 'SET NAMES utf8'
Back Up the MySQL Image
- Commit the mysql-server image, then remove the container:
$ docker commit -m "mysql server." -a "LastRitter<lastritter@gmail.com>" mysql-server mysql-server
$ docker rm mysql-server
- Export the mysql-server image (optional):
$ docker save -o openstack_pike_mysql_server_`date +%Y-%m-%d`.tar.gz mysql-server
- Import the mysql-server image (optional):
$ docker load --input openstack_pike_mysql_server_`date +%Y-%m-%d`.tar.gz
Start the MySQL Service
- Start the mysql-server container:
$ mkdir -pv /opt/mysql
$ docker run -d --name mysql-server --restart=always -p 13306:3306 -e MYSQL_ROOT_PASSWORD=123456 -v /opt/mysql:/var/lib/mysql -v /etc/localtime:/etc/localtime mysql-server
- From the host, check that the mysql-server service is working:
$ yum install -y mysql
$ mysql -h 192.168.195.131 -P 13306 -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.21 MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]> exit
Bye
Create the Gogs Database
Enter the mysql-server container and create the Gogs database:
$ docker exec -it mysql-server bash
$ mysql -h 127.0.0.1 -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.21 MySQL Community Server (GPL)
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database gogs default character set utf8 collate utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> exit
Start the Gogs Service
- Start the gogs-server container:
$ mkdir -pv /opt/gogs
$ docker run -d --name=gogs-server --restart=always -p 15555:22 -p 13000:3000 -v /opt/gogs:/data -v /etc/localtime:/etc/localtime gogs/gogs
- Log in to the Gogs web UI at http://192.168.195.131:13000, complete the initial setup, create an administrator account, and finally add an SSH key:
Database host: 192.168.195.131:13306
Database user: root
Database password: 123456
Domain: 192.168.195.131
SSH port: 15555
HTTP port: 13000
Application URL: http://192.168.195.131:13000/
Sync the Nova Source Code
- Clone the official source:
$ git clone https://git.openstack.org/openstack/nova
# Or
$ git clone https://github.com/openstack/nova.git
- Create a Nova project in Gogs, then add it as a remote repository:
$ cd nova
$ git remote add local ssh://git@192.168.195.131:15555/lastritter/nova.git
$ git remote -v
local ssh://git@192.168.195.131:15555/lastritter/nova.git (fetch)
local ssh://git@192.168.195.131:15555/lastritter/nova.git (push)
origin https://github.com/openstack/nova.git (fetch)
origin https://github.com/openstack/nova.git (push)
- Push the master and stable/pike branches to the local Gogs repository:
$ git push local master:master
$ git checkout -b pike/origin remotes/origin/stable/pike
$ git push local pike/origin:pike/origin
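The second-remote workflow above can be rehearsed entirely on the local filesystem, with a bare repository standing in for the Gogs SSH remote. All paths below are throwaway examples:

```shell
# A bare repo stands in for the Gogs remote.
rm -rf /tmp/git-demo && mkdir -p /tmp/git-demo
git init --bare -q /tmp/git-demo/nova-remote.git

# A working repo with a single commit.
git init -q /tmp/git-demo/nova
cd /tmp/git-demo/nova
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"

# Add the second remote and push, as done with Gogs above.
git remote add local /tmp/git-demo/nova-remote.git
git push -q local HEAD:master
git ls-remote --heads local
```

The refspec on the left of the colon names the local ref and the right names the remote branch, which is exactly how master:master and pike/origin:pike/origin work in the commands above.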
- View the tag information of the Nova 16.1.0 release:
$ git show 16.1.0
tag 16.1.0
Tagger: OpenStack Release Bot <infra-root@openstack.org>
Date: Thu Feb 15 23:53:11 2018 +0000
nova 16.1.0 release
meta:version: 16.1.0
meta:diff-start: -
meta:series: pike
meta:release-type: release
meta:pypi: no
meta:first: no
meta:release:Author: Matt Riedemann <mriedem.os@gmail.com>
meta:release:Commit: Matt Riedemann <mriedem.os@gmail.com>
meta:release:Change-Id: I0c4d2dfc306d711b1f649d94782e8ae40475c43f
meta:release:Code-Review+1: Lee Yarwood <lyarwood@redhat.com>
meta:release:Code-Review+2: Sean McGinnis <sean.mcginnis@gmail.com>
meta:release:Code-Review+2: Tony Breeds <tony@bakeyournoodle.com>
meta:release:Workflow+1: Tony Breeds <tony@bakeyournoodle.com>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQEcBAABAgAGBQJahh1nAAoJEEIeZHKBH5qBF70H/ibKNK+jHbcJrqc49ZDC8bU5
86539Pa0QwPbrREwFFzGbt9w9I6Grh9gXa4BpsQAHGM+mRi0RBqCY2WdJ2PoKwEx
fH3bCUaYvS4JFUZgQGmpidWM4RPmhOZ4wmdkbqy4soBrncmMsBxnlJ/q91DlPWUd
KMH4LGInZ0xq3APvYTNP/H8nJttrIQbgy8hgVPrQ+SLw/1hqW9zSkRqHIBGjlcec
EvoQD+2CBQ8Cthn7lsB+5h7x+efYgv+3kAwzvBslMLDp6y+x9VEzkhQIDX4xaD7j
9dLv+0p/OevA7gmC54rTs15R00qf2JzaMqaQ2tUg7HER2S33OfEOfbkcIqd3B78=
=yqzR
-----END PGP SIGNATURE-----
commit 806eda3da84d6f9b47c036ff138415458b837536
Merge: 6d06aa4 6d1877b
Author: Zuul <zuul@review.openstack.org>
Date: Tue Feb 13 17:31:57 2018 +0000
Merge "Query all cells for service version in _validate_bdm" into stable/pike
- Create a development branch from Nova 16.1.0 and push it to the local Git repository:
$ git checkout -b pike/devel
$ git reset --hard 16.1.0
HEAD is now at 806eda3 Merge "Query all cells for service version in _validate_bdm" into stable/pike
$ git push -u local pike/devel:pike/devel
Total 0 (delta 0), reused 0 (delta 0)
To ssh://git@192.168.195.131:15555/lastritter/nova.git
* [new branch] pike/devel -> pike/devel
Branch pike/devel set up to track remote branch pike/devel from local.
- Check the branch status:
$ git branch -av
master c683518 Merge "Fix allocation_candidates not to ignore shared RPs"
* pike/devel 806eda3 Merge "Query all cells for service version in _validate_bdm" into stable/pike
pike/origin 708342f Merge "compute: Cleans up allocations after failed resize" into stable/pike
remotes/origin/HEAD -> origin/master
remotes/origin/master c683518 Merge "Fix allocation_candidates not to ignore shared RPs"
remotes/origin/stable/ocata 781e7b3 Merge "Don't try to delete build request during a reschedule" into stable/ocata
remotes/origin/stable/pike 708342f Merge "compute: Cleans up allocations after failed resize" into stable/pike
remotes/origin/stable/queens 307382f Use ksa session for cinder microversion check
remotes/work/master c683518 Merge "Fix allocation_candidates not to ignore shared RPs"
remotes/work/pike/devel 806eda3 Merge "Query all cells for service version in _validate_bdm" into stable/pike
remotes/work/pike/origin 708342f Merge "compute: Cleans up allocations after failed resize" into stable/pike
Start Building Images
Modify the Image Templates
- Modify the base image Dockerfile template (docker/base/Dockerfile.j2), replacing the section between BEGIN REPO ENABLEMENT and END REPO ENABLEMENT with the following commands:
RUN mkdir -pv /tmp/rpms /tmp/keys
COPY cache/rpms/* /tmp/rpms/
COPY cache/keys/* /tmp/keys/
COPY cache/local_repo.conf /tmp/local_repo.conf
RUN rpm -ivh /tmp/rpms/*.rpm
RUN rpm --import /tmp/keys/RPM-GPG-KEY-MariaDB \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona \
&& rpm --import /tmp/keys/GPG-KEY-elasticsearch \
&& rpm --import /tmp/keys/influxdb.key \
&& rpm --import /tmp/keys/gpg.key \
&& rpm --import /tmp/keys/RPM-GPG-KEY-grafana \
&& rpm --import /tmp/keys/GPG-KEY-td-agent \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-OpsTools \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage \
&& rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Virtualization
RUN rm -rfv /tmp/rpms /tmp/keys /etc/yum.repos.d/*
COPY cache/local/* /etc/yum.repos.d/
RUN yum_local_repo_url_base=`cat /tmp/local_repo.conf`;ls /etc/yum.repos.d/*.repo | xargs sed -i 's/yum_local_repo_url_base/'$yum_local_repo_url_base'/g'
RUN rm -rfv /tmp/local_repo.conf
- Add a configuration file with the local repo address:
$ vi docker/base/cache/local_repo.conf
192.168.195.131:10022
- Download the dumb-init binary:
$ export http_proxy=192.168.195.1:1080; export https_proxy=$http_proxy
$ curl -sSL https://github.com/Yelp/dumb-init/releases/download/v1.1.3/dumb-init_1.1.3_amd64 -o /opt/kolla/kolla/docker/base/cache/dumb-init
- Use the local dumb-init file:
--- a/docker/base/Dockerfile.j2
+++ b/docker/base/Dockerfile.j2
@@ -253,8 +253,8 @@ COPY curlrc /root/.curlrc
{% if base_arch == 'x86_64' %}
-RUN curl -sSL https://github.com/Yelp/dumb-init/releases/download/v1.1.3/dumb-init_1.1.3_amd64 -o /usr/local/bin/dumb-init \
- && chmod +x /usr/local/bin/dumb-init \
+COPY cache/dumb-init /usr/local/bin/dumb-init
+RUN chmod +x /usr/local/bin/dumb-init \
&& sed -i 's|#!|#!/usr/local/bin/dumb-init |' /usr/local/bin/kolla_start
{% else %}
- Download the get-pip.py and requirements-stable-pike.tar.gz files:
$ mkdir -pv /opt/kolla/kolla/docker/openstack-base/cache
$ curl https://bootstrap.pypa.io/get-pip.py -o /opt/kolla/kolla/docker/openstack-base/cache/get-pip.py
$ curl http://tarballs.openstack.org/requirements/requirements-stable-pike.tar.gz -o /opt/kolla/kolla/docker/openstack-base/cache/requirements-stable-pike.tar.gz
- Modify the corresponding Dockerfile template:
--- a/docker/openstack-base/Dockerfile.j2
+++ b/docker/openstack-base/Dockerfile.j2
@@ -276,8 +276,8 @@ ENV DEBIAN_FRONTEND noninteractive
{{ macros.install_packages(openstack_base_packages | customizable("packages")) }}
{% block source_install_python_pip %}
-RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py \
- && python get-pip.py \
+COPY cache/get-pip.py get-pip.py
+RUN python get-pip.py \
&& rm get-pip.py
{% endblock %}
@@ -393,7 +393,8 @@ RUN python get-pip.py \
]
%}
-ADD openstack-base-archive /openstack-base-source
+COPY cache/requirements-stable-pike.tar.gz /tmp/requirements-stable-pike.tar.gz
+RUN mkdir -pv /openstack-base-source && tar xvf /tmp/requirements-stable-pike.tar.gz -C /openstack-base-source && rm -rfv /tmp/requirements-stable-pike.tar.gz
RUN ln -s openstack-base-source/* /requirements \
&& mkdir -p /var/lib/kolla \
&& {{ macros.install_pip(['virtualenv'], constraints = false)}} \
- Use the local Pip service:
--- a/docker/openstack-base/Dockerfile.j2
+++ b/docker/openstack-base/Dockerfile.j2
@@ -280,7 +280,10 @@ ENV DEBIAN_FRONTEND noninteractive
#COPY cache/get-pip.py get-pip.py
#RUN python get-pip.py \
# && rm get-pip.py
-RUN pip install --upgrade pip
+RUN mkdir -pv ~/.pip/
+COPY cache/local_repo.conf /tmp/local_repo.conf
+COPY cache/pip.conf /root/.pip/pip.conf
+RUN pip_ip=`cat /tmp/local_repo.conf | awk -F : '{print $1}'`; pip_port=`cat /tmp/local_repo.conf | awk -F : '{print $2}'`; sed -i 's/pip_local_repo_ip/'$pip_ip'/g' /root/.pip/pip.conf; sed -i 's/pip_local_repo_port/'$pip_port'/g' /root/.pip/pip.conf; pip install --upgrade pip && rm -rfv /tmp/local_repo.conf
{% endblock %}
{% set openstack_base_pip_packages = [
--- /dev/null
+++ b/docker/openstack-base/cache/local_repo.conf
@@ -0,0 +1 @@
+192.168.195.131:3141
--- /dev/null
+++ b/docker/openstack-base/cache/pip.conf
@@ -0,0 +1,3 @@
+[global]
+trusted-host = pip_local_repo_ip
+index-url = http://pip_local_repo_ip:pip_local_repo_port/simple
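The RUN line in the diff above splits the IP:PORT pair in local_repo.conf with awk and substitutes it into the pip.conf template with sed. The same logic can be tried outside Docker; the paths below are scratch stand-ins for the files added by the diff:

```shell
# Mimic the substitution performed by the Dockerfile RUN line.
mkdir -p /tmp/pip-template
echo 192.168.195.131:3141 > /tmp/pip-template/local_repo.conf
cat > /tmp/pip-template/pip.conf <<'EOF'
[global]
trusted-host = pip_local_repo_ip
index-url = http://pip_local_repo_ip:pip_local_repo_port/simple
EOF

pip_ip=$(awk -F : '{print $1}' /tmp/pip-template/local_repo.conf)
pip_port=$(awk -F : '{print $2}' /tmp/pip-template/local_repo.conf)
sed -i 's/pip_local_repo_ip/'$pip_ip'/g' /tmp/pip-template/pip.conf
sed -i 's/pip_local_repo_port/'$pip_port'/g' /tmp/pip-template/pip.conf

cat /tmp/pip-template/pip.conf
# [global]
# trusted-host = 192.168.195.131
# index-url = http://192.168.195.131:3141/simple
```

Keeping the address in one small file means only local_repo.conf has to change when the build environment moves to a different host, as the deployment section later does with 172.29.101.166.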
Basic Usage
View the command help:
$ python tools/build.py --help
usage: kolla-build [-h] [--base BASE] [--base-arch BASE_ARCH]
[--base-image BASE_IMAGE] [--base-tag BASE_TAG]
[--build-args BUILD_ARGS] [--cache] [--config-dir DIR]
[--config-file PATH] [--debug] [--docker-dir DOCKER_DIR]
[--format FORMAT] [--keep] [--list-dependencies]
[--list-images] [--logs-dir LOGS_DIR]
[--namespace NAMESPACE] [--nocache] [--nodebug] [--nokeep]
[--nolist-dependencies] [--nolist-images] [--nopull]
[--nopush] [--noskip-existing] [--noskip-parents]
[--notemplate-only] [--profile PROFILE] [--pull] [--push]
[--push-threads PUSH_THREADS] [--registry REGISTRY]
[--retries RETRIES] [--save-dependency SAVE_DEPENDENCY]
[--skip-existing] [--skip-parents] [--tag TAG]
[--tarballs-base TARBALLS_BASE] [--template-only]
[--template-override TEMPLATE_OVERRIDE] [--threads THREADS]
[--timeout TIMEOUT] [--type INSTALL_TYPE] [--version]
[--work-dir WORK_DIR]
[regex [regex ...]]
positional arguments:
regex Build only images matching regex and its dependencies
optional arguments:
-h, --help show this help message and exit
--base BASE, -b BASE The distro type of the base image. Allowed values are
centos, rhel, ubuntu, oraclelinux, debian Allowed
values: centos, rhel, ubuntu, oraclelinux, debian
--base-arch BASE_ARCH
The base architecture. Default is same as host Allowed
values: x86_64, ppc64le, aarch64
--base-image BASE_IMAGE
The base image name. Default is the same with base.
For non-x86 architectures use full name like
"aarch64/debian".
--base-tag BASE_TAG The base distro image tag
--build-args BUILD_ARGS
Set docker build time variables
--cache Use the Docker cache when building
--config-dir DIR Path to a config directory to pull `*.conf` files
from. This file set is sorted, so as to provide a
predictable parse order if individual options are
over-ridden. The set is parsed after the file(s)
specified via previous --config-file, arguments hence
over-ridden options in the directory take precedence.
--config-file PATH Path to a config file to use. Multiple config files
can be specified, with values in later files taking
precedence. Defaults to None.
--debug, -d Turn on debugging log level
--docker-dir DOCKER_DIR, -D DOCKER_DIR
Path to additional docker file template directory
--format FORMAT, -f FORMAT
Format to write the final results in Allowed values:
json, none
--keep Keep failed intermediate containers
--list-dependencies, -l
Show image dependencies (filtering supported)
--list-images Show all available images (filtering supported)
--logs-dir LOGS_DIR Path to logs directory
--namespace NAMESPACE, -n NAMESPACE
The Docker namespace name
--nocache The inverse of --cache
--nodebug The inverse of --debug
--nokeep The inverse of --keep
--nolist-dependencies
The inverse of --list-dependencies
--nolist-images The inverse of --list-images
--nopull The inverse of --pull
--nopush The inverse of --push
--noskip-existing The inverse of --skip-existing
--noskip-parents The inverse of --skip-parents
--notemplate-only The inverse of --template-only
--profile PROFILE, -p PROFILE
Build a pre-defined set of images, see [profiles]
section in config. The default profiles are: infra,
main, aux, default, gate
--pull Attempt to pull a newer version of the base image
--push Push images after building
--push-threads PUSH_THREADS
The number of threads to user while pushing Images.
Note: Docker can not handle threading push properly
--registry REGISTRY The docker registry host. The default registry host is
Docker Hub
--retries RETRIES, -r RETRIES
The number of times to retry while building
--save-dependency SAVE_DEPENDENCY
Path to the file to store the docker image dependency
in Graphviz dot format
--skip-existing Do not rebuild images present in the docker cache
--skip-parents Do not rebuild parents of matched images
--tag TAG The Docker tag
--tarballs-base TARBALLS_BASE
Base url to OpenStack tarballs
--template-only Don't build images. Generate Dockerfile only
--template-override TEMPLATE_OVERRIDE
Path to template override file
--threads THREADS, -T THREADS
The number of threads to use while building. (Note:
setting to one will allow real time logging)
--timeout TIMEOUT Time in seconds after which any operation times out
--type INSTALL_TYPE, -t INSTALL_TYPE
The method of the OpenStack install. Allowed values
are binary, source, rdo, rhos Allowed values: binary,
source, rdo, rhos
--version show program's version number and exit
--work-dir WORK_DIR Path to be used as working directory.By default, a
temporary dir is created
- --base BASE, -b BASE: distro of the base image; default is centos; allowed values: centos, rhel, ubuntu, oraclelinux, debian;
- --base-arch BASE_ARCH: base architecture; default is the same as the host; allowed values: x86_64, ppc64le, aarch64;
- --base-image BASE_IMAGE: base image name; default is the same as --base; for non-x86 architectures use the full name, e.g. aarch64/debian;
- --base-tag BASE_TAG: tag of the base distro image;
- --build-args BUILD_ARGS: set Docker build-time variables;
- --cache: use the cache when building;
- --config-dir DIR: directory of *.conf configuration files, parsed in order after any files given via --config-file;
- --config-file PATH: configuration file to use; may be specified multiple times;
- --debug, -d: turn on debug logging;
- --docker-dir DOCKER_DIR, -D DOCKER_DIR: additional Dockerfile template directory;
- --format FORMAT, -f FORMAT: format of the final results; allowed values: json, none;
- --keep: keep failed intermediate containers;
- --list-dependencies, -l: show image dependencies (filtering supported);
- --list-images: show all available images (filtering supported);
- --logs-dir LOGS_DIR: logs directory;
- --namespace NAMESPACE: Docker image namespace;
- --nocache: do not use the cache;
- --nodebug: do not print debug output;
- --nokeep: do not keep failed intermediate containers;
- --nolist-dependencies: do not show image dependencies;
- --nolist-images: do not show buildable images;
- --nopull: do not try to pull a newer base image;
- --nopush: do not push images after building;
- --noskip-existing: do not skip existing images when building;
- --noskip-parents: do not skip parent images when building;
- --notemplate-only: generate Dockerfiles and build the images as well;
- --profile PROFILE, -p PROFILE: build the set of images predefined in the [profiles] section of the config; defaults: infra, main, aux, default, gate;
- --pull: attempt to pull the latest base image when building;
- --push: push images after building;
- --push-threads PUSH_THREADS: number of threads to use when pushing;
- --registry REGISTRY: Docker registry to push to; default is Docker Hub;
- --retries RETRIES, -r RETRIES: number of retries while building;
- --save-dependency SAVE_DEPENDENCY: save the image dependency graph in Graphviz dot format to the given path;
- --skip-existing: do not rebuild images already present in the Docker cache;
- --skip-parents: do not rebuild parents of the matched images;
- --tag TAG: tag of the generated images;
- --tarballs-base TARBALLS_BASE: base URL for the OpenStack tarballs;
- --template-only: only generate Dockerfiles, do not build images;
- --template-override TEMPLATE_OVERRIDE: path to a template override file;
- --threads THREADS, -T THREADS: number of build threads;
- --timeout TIMEOUT: operation timeout in seconds;
- --type INSTALL_TYPE, -t INSTALL_TYPE: OpenStack install type; allowed values: binary, source, rdo, rhos;
- --version: show the program version;
- --work-dir WORK_DIR: working directory; a temporary directory is used by default.
Start the Build
- Build all source images with the default configuration:
$ python tools/build.py -t source
- Build openstack-base and the source images it depends on, with the default configuration and no cache:
$ python tools/build.py -t source --nocache openstack-base
- Build the openstack-base image using a profile:
$ cp -v /opt/kolla/kolla/etc/kolla/kolla-build.conf /opt/kolla/
$ vi /opt/kolla/kolla-build.conf
[profiles]
myprofile=openstack-base
$ python tools/build.py -t source --debug --nocache --nopull --work-dir /opt/kolla/ --config-file /opt/kolla/kolla-build.conf --profile myprofile
- Build the nova-base source image with the default configuration, without the cache and without building or pulling parent images:
$ python tools/build.py -t source --nocache --skip-parents --nopull nova-base
- Build the nova-base image from local source:
$ mkdir -pv docker/nova/nova-base/cache
$ wget http://tarballs.openstack.org/nova/nova-16.1.0.tar.gz -O /opt/kolla/kolla/docker/nova/nova-base/cache/nova-16.1.0.tar.gz
$ wget http://tarballs.openstack.org/blazar/blazar-0.3.0.tar.gz -O /opt/kolla/kolla/docker/nova/nova-base/cache/blazar-0.3.0.tar.gz
$ vi /opt/kolla/kolla-build.conf
[nova-base]
type = local
location = /opt/kolla/kolla/docker/nova/nova-base/cache/nova-16.1.0.tar.gz
[nova-base-plugin-blazar]
type = local
location = /opt/kolla/kolla/docker/nova/nova-base/cache/blazar-0.3.0.tar.gz
$ python tools/build.py -t source --nocache --skip-parents --nopull --config-file /opt/kolla/kolla-build.conf nova-base
- Build the image from Git source (the source is updated correctly even without the --nocache option):
$ vi /opt/kolla/kolla-build.conf
[nova-base]
type = git
location = http://192.168.195.131:13000/lastritter/nova.git
reference = pike/devel
[nova-base-plugin-blazar]
type = local
location = /opt/kolla/kolla/docker/nova/nova-base/cache/blazar-0.3.0.tar.gz
$ python tools/build.py -t source --skip-parents --nopull --config-file /opt/kolla/kolla-build.conf nova-base
Create the Build Image
- Start the Docker-in-Docker service container:
$ docker run --privileged --name dind --restart=always -d docker:stable-dind
$ docker exec -it dind sh
$ vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
$ docker restart dind
- Start the Docker-in-Docker client container and install basic packages:
$ docker run -it --name kolla-build --link dind:docker docker:edge sh
$ apk update
$ apk add git py-pip gcc python-dev linux-headers libffi-dev musl-dev openssl-dev perl python sshpass
$ pip install --upgrade pip
$ pip install pyopenssl tox
- Get the Kolla source code:
$ git clone https://gitee.com/lastritter/kolla.git
$ cd kolla
# git checkout -b devel/pike remotes/origin/devel/pike
- Generate the configuration files:
$ pip install -r requirements.txt -r test-requirements.txt
$ tox -e genconfig
$ mkdir -pv /etc/kolla/
$ cp -v etc/kolla/kolla-build.conf /etc/kolla/
- Commit the Kolla build Docker image, then remove the container:
$ docker commit -m "openstack pike kolla build." -a "LastRitter<lastritter@gmail.com>" kolla-build kolla-build:pike
$ docker rm kolla-build
- Export the Kolla build Docker image (optional):
$ docker save -o openstack_pike_kolla_build_`date +%Y-%m-%d`.tar.gz kolla-build:pike
- Import the Kolla build Docker image (optional):
$ docker load --input openstack_pike_kolla_build_`date +%Y-%m-%d`.tar.gz
Quick Environment Deployment
Using the Docker images generated earlier and the modified Kolla source, a new build environment can be deployed quickly; alternatively, deploy only the Kolla-Build image and keep using the original services for everything else. The deployment host's IP is 172.29.101.166.
Deploy the Yum Service
- Import the Yum sync image:
$ mkdir -pv /opt/yum/repo
$ docker load --input /path/to/openstack_pike_yum_sync_2018-03-27.tar.gz
- Sync the remote Yum repositories:
$ docker run --rm -it -v /opt:/opt/ yum-sync:pike reposync -p /opt/yum/repo/
# Or
$ docker run --rm -it -v /opt:/opt/ yum-sync:pike reposync -p /opt/yum/repo/ --repoid=base
- Create the package index files for the local Yum repositories:
$ yum install -y createrepo
$ ls /opt/yum/repo/ | xargs -I {} createrepo -p /opt/yum/repo/{}
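The ls | xargs -I {} pattern above runs createrepo once per repository directory. It can be dry-run by substituting echo for createrepo, which prints the commands that would execute; the directories below are scratch examples:

```shell
# Dry-run the per-directory indexing loop with echo in place of createrepo.
mkdir -p /tmp/yum-demo/repo/base /tmp/yum-demo/repo/epel /tmp/yum-demo/repo/extras
ls /tmp/yum-demo/repo/ | xargs -I {} echo createrepo -p /tmp/yum-demo/repo/{}
# createrepo -p /tmp/yum-demo/repo/base
# createrepo -p /tmp/yum-demo/repo/epel
# createrepo -p /tmp/yum-demo/repo/extras
```

The -I {} placeholder substitutes each directory name into the command, so one invocation runs per synced repository.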
- Start the Yum repository service, exposing port 12222:
$ docker run --name=yum-server --restart=always -d -p 12222:80 -v /opt/yum/repo:/usr/share/nginx/html nginx
- Check that the Yum repository service is working:
$ curl -X GET http://172.29.101.166:12222/base/repodata/repomd.xml
- Import the Yum repository test Docker image:
$ docker load --input /path/to/openstack_pike_yum_client_2018-03-30.tar.gz
- Check that the Yum repository is working:
$ docker run --rm -it yum-client:pike /bin/bash
$ ls /etc/yum.repos.d/*.repo | xargs sed -i 's/yum_local_repo_url_base/172.29.101.166:12222/g'
$ yum repolist
Deploy the Pip Service
- Import the pip-server image:
$ docker load --input openstack_pike_pip_server_2018-03-31.tar.gz
- Start a pip-server container:
$ mkdir -pv /opt/pip
$ docker run --name=pip-server --restart=always -d -p 3141:3141 -v /opt/:/opt/ pip-server:pike pypi-server -p 3141 -P ~/.htaccess /opt/pip
- Batch-download packages:
$ vi /opt/requirements.txt
$ docker run --rm -it -v /opt/:/opt/ pip-server:pike bash
$ cd /opt/pip/ && pip download -r /opt/requirements.txt
- Build the index:
$ docker run --rm -it -v /opt/:/opt/ pip-server:pike dir2pi --normalize-package-names /opt/pip/
Deploy the Kolla Environment
- Import the Kolla-Build image:
$ docker load --input openstack_pike_kolla_build_2018-03-30.tar.gz
- Start the Kolla-Build container:
$ docker run --privileged --name dind --restart=always -d docker:stable-dind
$ docker exec -it dind sh
$ vi /etc/docker/daemon.json
{
"registry-mirrors": ["https://7g5a4z30.mirror.aliyuncs.com"]
}
$ docker restart dind
$ docker run -it --name kolla-build --link dind:docker kolla-build:pike sh
- Replace the IP addresses of the local Yum and Pip services:
$ cd kolla && git pull
$ vi docker/base/cache/local_repo.conf
172.29.101.166:12222
$ vi docker/openstack-base/cache/local_repo.conf
172.29.101.166:3141
Start the Build
- Build the openstack-base image:
$ docker pull centos:7
$ python tools/build.py -t source --nocache openstack-base
# Or
$ python tools/build.py -t source --skip-parents openstack-base
- List the successfully built images:
$ docker run -it --rm --link dind:docker kolla-build:pike docker images
Image Startup Analysis
For ease of understanding, the analysis below uses the Dockerfiles generated by the following command:
$ python tools/build.py -t source --template-only --work-dir=..
- Examine the base image's Dockerfile: when the container starts, the dumb-init program sets up the init process environment and runs the kolla_start command (generated from the start.sh script) (../docker/base/Dockerfile):
COPY set_configs.py /usr/local/bin/kolla_set_configs
COPY start.sh /usr/local/bin/kolla_start
COPY sudoers /etc/sudoers
COPY curlrc /root/.curlrc
COPY cache/dumb-init /usr/local/bin/dumb-init
RUN chmod +x /usr/local/bin/dumb-init \
&& sed -i 's|#!|#!/usr/local/bin/dumb-init |' /usr/local/bin/kolla_start
RUN touch /usr/local/bin/kolla_extend_start \
&& chmod 755 /usr/local/bin/kolla_start /usr/local/bin/kolla_extend_start /usr/local/bin/kolla_set_configs \
&& chmod 440 /etc/sudoers \
&& mkdir -p /var/log/kolla \
&& chown :kolla /var/log/kolla \
&& chmod 2775 /var/log/kolla \
&& rm -f /tmp/kolla_bashrc
CMD ["kolla_start"]
The start.sh script first runs kolla_set_configs (generated from docker/base/set_configs.py) to initialize the configuration, then runs the extended startup command kolla_extend_start (provided by derived images), and finally executes the command given in /run_command to start the service (docker/base/start.sh):
#!/bin/bash
set -o errexit
# Processing /var/lib/kolla/config_files/config.json as root. This is necessary
# to permit certain files to be controlled by the root user which should
# not be writable by the dropped-privileged user, especially /run_command
sudo -E kolla_set_configs
CMD=$(cat /run_command)
ARGS=""
if [[ ! "${!KOLLA_SKIP_EXTEND_START[@]}" ]]; then
    # Run additional commands if present
    . kolla_extend_start
fi
echo "Running command: '${CMD}${ARGS:+ $ARGS}'"
exec ${CMD} ${ARGS}
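The CMD=$(cat /run_command) mechanism can be illustrated standalone. A sketch using a scratch file in place of /run_command; in a real container kolla_set_configs writes that file, and start.sh uses exec so the service process replaces the shell:

```shell
# Emulate start.sh: read the service command from a file and run it.
echo 'echo nova-compute would start here' > /tmp/run_command
CMD=$(cat /tmp/run_command)
ARGS=""
echo "Running command: '${CMD}${ARGS:+ $ARGS}'"
${CMD} ${ARGS}
# Running command: 'echo nova-compute would start here'
# nova-compute would start here
```

Because the command is read from a file rather than baked into the image, the same image can start any of the service's daemons depending on the deployed configuration.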
The kolla_set_configs main function (docker/base/set_configs.py):
def main():
    try:
        parser = argparse.ArgumentParser()
        parser.add_argument('--check',
                            action='store_true',
                            required=False,
                            help='Check whether the configs changed')
        args = parser.parse_args()
        config = load_config()

        if args.check:
            execute_config_check(config)
        else:
            execute_config_strategy(config)
    except ExitingException as e:
        LOG.error("%s: %s", e.__class__.__name__, e)
        return e.exit_code
    except Exception:
        LOG.exception('Unexpected error:')
        return 2

    return 0


if __name__ == "__main__":
    sys.exit(main())
`kolla_set_configs` first loads its configuration: if the `KOLLA_CONFIG` environment variable is not defined, it loads the JSON file `/var/lib/kolla/config_files/config.json` (or the path given in `KOLLA_CONFIG_FILE`) (`docker/base/set_configs.py`):
def load_config():
    def load_from_env():
        config_raw = os.environ.get("KOLLA_CONFIG")
        if config_raw is None:
            return None

        # Attempt to read config
        try:
            return json.loads(config_raw)
        except ValueError:
            raise InvalidConfig('Invalid json for Kolla config')

    def load_from_file():
        config_file = os.environ.get("KOLLA_CONFIG_FILE")
        if not config_file:
            config_file = '/var/lib/kolla/config_files/config.json'
        LOG.info("Loading config file at %s", config_file)

        # Attempt to read config file
        with open(config_file) as f:
            try:
                return json.load(f)
            except ValueError:
                raise InvalidConfig(
                    "Invalid json file found at %s" % config_file)
            except IOError as e:
                raise InvalidConfig(
                    "Could not read file %s: %r" % (config_file, e))

    config = load_from_env()
    if config is None:
        config = load_from_file()

    LOG.info('Validating config file')
    validate_config(config)
    return config
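The precedence is: inline JSON in `KOLLA_CONFIG` first, then the file named by `KOLLA_CONFIG_FILE`, then the default path. A simplified, self-contained sketch of that ordering (`load_config_sketch`, `DEFAULT_PATH`, and the injected `read_file` callback are illustrative, not Kolla's API):

```python
import json

DEFAULT_PATH = '/var/lib/kolla/config_files/config.json'

def load_config_sketch(environ, read_file):
    """Simplified load_config: inline env JSON wins over any file."""
    raw = environ.get('KOLLA_CONFIG')
    if raw is not None:
        return json.loads(raw)          # inline config takes precedence
    path = environ.get('KOLLA_CONFIG_FILE') or DEFAULT_PATH
    return json.loads(read_file(path))  # fall back to a config file

# Inline env config shadows any file content:
cfg = load_config_sketch({'KOLLA_CONFIG': '{"command": "nova-compute"}'},
                         lambda p: '{}')
print(cfg['command'])  # nova-compute
```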
Finally, `kolla_set_configs` copies the OpenStack component configuration files according to the config file and the `KOLLA_CONFIG_STRATEGY` environment variable, and writes the startup command into `/run_command` (`docker/base/set_configs.py`):
def execute_config_strategy(config):
    config_strategy = os.environ.get("KOLLA_CONFIG_STRATEGY")
    LOG.info("Kolla config strategy set to: %s", config_strategy)
    if config_strategy == "COPY_ALWAYS":
        copy_config(config)
        handle_permissions(config)
    elif config_strategy == "COPY_ONCE":
        if os.path.exists('/configured'):
            raise ImmutableConfig(
                "The config strategy prevents copying new configs",
                exit_code=0)
        else:
            copy_config(config)
            handle_permissions(config)
            os.mknod('/configured')
    else:
        raise InvalidConfig('KOLLA_CONFIG_STRATEGY is not set properly')


def copy_config(config):
    if 'config_files' in config:
        LOG.info('Copying service configuration files')
        for data in config['config_files']:
            config_file = ConfigFile(**data)
            config_file.copy()
    else:
        LOG.debug('No files to copy found in config')

    LOG.info('Writing out command to execute')
    LOG.debug("Command is: %s", config['command'])
    # The value from the 'command' key will be written to '/run_command'
    with open('/run_command', 'w+') as f:
        f.write(config['command'])
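The `COPY_ONCE` strategy hinges on a `/configured` sentinel file. The same idea can be sketched self-containedly (with an injectable sentinel path standing in for `/configured`):

```python
import os
import tempfile

def apply_once(sentinel, copy):
    """COPY_ONCE semantics: copy configs only if the sentinel is absent."""
    if os.path.exists(sentinel):
        return False                 # already configured; refuse to re-copy
    copy()
    open(sentinel, 'w').close()      # equivalent of os.mknod('/configured')
    return True

calls = []
with tempfile.TemporaryDirectory() as d:
    sentinel = os.path.join(d, 'configured')
    first = apply_once(sentinel, lambda: calls.append(1))
    second = apply_once(sentinel, lambda: calls.append(1))  # no-op
print(first, second, len(calls))  # True False 1
```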
- Inspect the environment variables and configuration inside the `nova_compute` container:
$ docker exec -it nova_compute bash
$ env | grep KOLLA
KOLLA_CONFIG_STRATEGY=COPY_ALWAYS
KOLLA_BASE_DISTRO=centos
KOLLA_INSTALL_TYPE=source
PS1=$(tput bold)($(printenv KOLLA_SERVICE_NAME))$(tput sgr0)[$(id -un)@$(hostname -s) $(pwd)]$
KOLLA_SERVICE_NAME=nova-compute
KOLLA_INSTALL_METATYPE=mixed
$ cat /run_command
nova-compute
$ cat /var/lib/kolla/config_files/config.json
{
    "command": "nova-compute",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/nova.conf",
            "dest": "/etc/nova/nova.conf",
            "owner": "nova",
            "perm": "0600"
        },
        {
            "source": "/var/lib/kolla/config_files/policy.json",
            "dest": "/etc/nova/policy.json",
            "owner": "nova",
            "perm": "0600",
            "optional": true
        }
    ],
    "permissions": [
        {
            "path": "/var/log/kolla/nova",
            "owner": "nova:nova",
            "recurse": true
        },
        {
            "path": "/var/lib/nova",
            "owner": "nova:nova",
            "recurse": true
        }
    ]
}
Kolla Source Code Analysis
Command Overview
- The `build.py` command is a symbolic link:
$ ll tools/build.py
lrwxrwxrwx 1 root root 21 3月 20 13:30 tools/build.py -> ../kolla/cmd/build.py
- The command entry point calls the `run_build` function of the `kolla.image.build` module (`kolla/cmd/build.py`):
import os
import sys

# NOTE(SamYaple): Update the search path to prefer PROJECT_ROOT as the source
#                 of packages to import if we are using local tools instead of
#                 pip installed kolla tools
PROJECT_ROOT = os.path.abspath(os.path.join(
    os.path.dirname(os.path.realpath(__file__)), '../..'))
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)

from kolla.image import build


def main():
    statuses = build.run_build()

    if statuses:
        (bad_results, good_results, unmatched_results,
         skipped_results) = statuses
        if bad_results:
            return 1
    return 0


if __name__ == '__main__':
    sys.exit(main())
- The main build function (`kolla/image/build.py`):
def run_build():
    """Build container images.

    :return: A 4-tuple containing bad, good, unmatched and skipped
             container image status dicts, or None if no images were built.
    """
    conf = cfg.ConfigOpts()
    common_config.parse(conf, sys.argv[1:], prog='kolla-build')

    if conf.debug:
        LOG.setLevel(logging.DEBUG)

    kolla = KollaWorker(conf)
    kolla.setup_working_dir()
    kolla.find_dockerfiles()
    kolla.create_dockerfiles()

    if conf.template_only:
        LOG.info('Dockerfiles are generated in %s', kolla.working_dir)
        return

    # We set the atime and mtime to 0 epoch to allow the Docker cache
    # to work like we want. A different size or hash will still force a rebuild
    kolla.set_time()

    if conf.save_dependency:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.save_dependency(conf.save_dependency)
        LOG.info('Docker images dependency are saved in %s',
                 conf.save_dependency)
        return
    if conf.list_images:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.list_images()
        return
    if conf.list_dependencies:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.list_dependencies()
        return

    push_queue = six.moves.queue.Queue()
    queue = kolla.build_queue(push_queue)
    workers = []

    with join_many(workers):
        try:
            for x in six.moves.range(conf.threads):
                worker = WorkerThread(conf, queue)
                worker.setDaemon(True)
                worker.start()
                workers.append(worker)

            for x in six.moves.range(conf.push_threads):
                worker = WorkerThread(conf, push_queue)
                worker.setDaemon(True)
                worker.start()
                workers.append(worker)

            # sleep until queue is empty
            while queue.unfinished_tasks or push_queue.unfinished_tasks:
                time.sleep(3)

            # ensure all threads exited happily
            push_queue.put(WorkerThread.tombstone)
            queue.put(WorkerThread.tombstone)
        except KeyboardInterrupt:
            for w in workers:
                w.should_stop = True
            push_queue.put(WorkerThread.tombstone)
            queue.put(WorkerThread.tombstone)
            raise

    results = kolla.summary()
    kolla.cleanup()
    if conf.format == 'json':
        print(json.dumps(results))
    return kolla.get_image_statuses()
Argument Parsing
- Parse the configuration files and command-line arguments (`kolla/image/build.py`):
from oslo_config import cfg
# ...

def run_build():
    # ...
    conf = cfg.ConfigOpts()
    common_config.parse(conf, sys.argv[1:], prog='kolla-build')
    # ...
- The `parse` function (`kolla/common/config.py`):
def parse(conf, args, usage=None, prog=None,
          default_config_files=None):
    conf.register_cli_opts(_CLI_OPTS)
    conf.register_opts(_BASE_OPTS)
    conf.register_opts(_PROFILE_OPTS, group='profiles')
    for name, opts in gen_all_source_opts():
        conf.register_opts(opts, name)
    for name, opts in gen_all_user_opts():
        conf.register_opts(opts, name)

    conf(args=args,
         project='kolla',
         usage=usage,
         prog=prog,
         version=version.cached_version_string(),
         default_config_files=default_config_files)

    # NOTE(jeffrey4l): set the default base tag based on the
    # base option
    conf.set_default('base_tag', DEFAULT_BASE_TAGS.get(conf.base))

    if not conf.base_image:
        conf.base_image = conf.base
- Default and optional configuration values (`kolla/common/config.py`):
# ...
BASE_OS_DISTRO = ['centos', 'rhel', 'ubuntu', 'oraclelinux', 'debian']
BASE_ARCH = ['x86_64', 'ppc64le', 'aarch64']
DEFAULT_BASE_TAGS = {
    'centos': '7',
    'rhel': '7',
    'oraclelinux': '7-slim',
    'debian': 'stretch',
    'ubuntu': '16.04',
}
DISTRO_RELEASE = {
    'centos': '7',
    'rhel': '7',
    'oraclelinux': '7',
    'debian': 'stretch',
    'ubuntu': '16.04',
}

# This is noarch repository so we will use it on all architectures
DELOREAN = \
    "https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo"

# TODO(hrw): with move to Pike+1 we need to make sure that aarch64 repo
# gets updated (docker/base/aarch64-cbs.repo file)
# there is ongoing work to sort that out
DELOREAN_DEPS = {
    'x86_64': "https://trunk.rdoproject.org/centos7/delorean-deps.repo",
    'aarch64': "",
    'ppc64le': ""
}

INSTALL_TYPE_CHOICES = ['binary', 'source', 'rdo', 'rhos']

TARBALLS_BASE = "http://tarballs.openstack.org"
# ...
SOURCES = {
    'openstack-base': {
        'type': 'url',
        'location': ('$tarballs_base/requirements/'
                     'requirements-stable-pike.tar.gz')},
    'aodh-base': {
        'type': 'url',
        'location': ('$tarballs_base/aodh/'
                     'aodh-5.1.0.tar.gz')},
    'barbican-base': {
        'type': 'url',
        'location': ('$tarballs_base/barbican/'
                     'barbican-5.0.0.tar.gz')},
    'bifrost-base': {
        'type': 'url',
        'location': ('$tarballs_base/bifrost/'
                     'bifrost-4.0.1.tar.gz')},
    # ...
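Given these tables, the `base_tag` defaulting performed in `parse` reduces to a dictionary lookup. A minimal sketch (`default_base_tag` is an illustrative helper, not a Kolla function; the dict mirrors `DEFAULT_BASE_TAGS` above):

```python
# Mirror of DEFAULT_BASE_TAGS from kolla/common/config.py
DEFAULT_BASE_TAGS = {
    'centos': '7',
    'rhel': '7',
    'oraclelinux': '7-slim',
    'debian': 'stretch',
    'ubuntu': '16.04',
}

def default_base_tag(base, explicit_tag=None):
    """Roughly what conf.set_default('base_tag', ...) amounts to:
    an explicit --base-tag wins, otherwise the tag follows the distro."""
    return explicit_tag or DEFAULT_BASE_TAGS.get(base)

print(default_base_tag('ubuntu'))         # 16.04
print(default_base_tag('centos', '7.4'))  # 7.4
```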
Preparing the Build Environment
- Create the `KollaWorker` object (`kolla/image/build.py`):
# ...
def run_build():
    # ...
    kolla = KollaWorker(conf)
    # ...
- Initialize the variables used throughout the build (`kolla/image/build.py`):
class KollaWorker(object):

    def __init__(self, conf):
        self.conf = conf
        self.images_dir = self._get_images_dir()
        self.registry = conf.registry
        if self.registry:
            self.namespace = self.registry + '/' + conf.namespace
        else:
            self.namespace = conf.namespace
        self.base = conf.base
        self.base_tag = conf.base_tag
        self.install_type = conf.install_type
        self.tag = conf.tag
        self.base_arch = conf.base_arch
        self.images = list()
        rpm_setup_config = ([repo_file for repo_file in
                             conf.rpm_setup_config if repo_file is not None])
        self.rpm_setup = self.build_rpm_setup(rpm_setup_config)

        rh_base = ['centos', 'oraclelinux', 'rhel']
        rh_type = ['source', 'binary', 'rdo', 'rhos']
        deb_base = ['ubuntu', 'debian']
        deb_type = ['source', 'binary']

        if not ((self.base in rh_base and self.install_type in rh_type) or
                (self.base in deb_base and self.install_type in deb_type)):
            raise exception.KollaMismatchBaseTypeException(
                '{} is unavailable for {}'.format(self.install_type, self.base)
            )

        if self.install_type == 'binary':
            self.install_metatype = 'rdo'
        elif self.install_type == 'source':
            self.install_metatype = 'mixed'
        elif self.install_type == 'rdo':
            self.install_type = 'binary'
            self.install_metatype = 'rdo'
        elif self.install_type == 'rhos':
            self.install_type = 'binary'
            self.install_metatype = 'rhos'
        else:
            raise exception.KollaUnknownBuildTypeException(
                'Unknown install type'
            )

        self.image_prefix = self.base + '-' + self.install_type + '-'

        self.regex = conf.regex
        self.image_statuses_bad = dict()
        self.image_statuses_good = dict()
        self.image_statuses_unmatched = dict()
        self.image_statuses_skipped = dict()
        self.maintainer = conf.maintainer

        docker_kwargs = docker.utils.kwargs_from_env()
        self.dc = docker.APIClient(version='auto', **docker_kwargs)
Setting Up the Working Directory
- Use `KollaWorker` to initialize the working directory (`kolla/image/build.py`):
# ...
def run_build():
    # ...
    kolla.setup_working_dir()
    # ...
- If `work_dir` is set, a `docker` directory under that path is used as the working directory; otherwise a timestamped temporary directory is created. Finally, all files from the project's `docker` directory are copied into the working directory (`kolla/image/build.py`):
class KollaWorker(object):
    # ...
    def setup_working_dir(self):
        """Creates a working directory for use while building."""
        if self.conf.work_dir:
            self.working_dir = os.path.join(self.conf.work_dir, 'docker')
        else:
            ts = time.time()
            ts = datetime.datetime.fromtimestamp(ts).strftime(
                '%Y-%m-%d_%H-%M-%S_')
            self.temp_dir = tempfile.mkdtemp(prefix='kolla-' + ts)
            self.working_dir = os.path.join(self.temp_dir, 'docker')
        self.copy_dir(self.images_dir, self.working_dir)
        for dir in self.conf.docker_dir:
            self.copy_dir(dir, self.working_dir)
        self.copy_apt_files()
        LOG.debug('Created working dir: %s', self.working_dir)
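The temporary working directory embeds a human-readable timestamp in its name. A quick sketch of that naming scheme, using the same `strftime` format:

```python
import datetime
import os
import tempfile
import time

# Same prefix construction as setup_working_dir: kolla-<timestamp>_<random>
ts = datetime.datetime.fromtimestamp(time.time()).strftime(
    '%Y-%m-%d_%H-%M-%S_')
temp_dir = tempfile.mkdtemp(prefix='kolla-' + ts)
working_dir = os.path.join(temp_dir, 'docker')
print(os.path.basename(temp_dir).startswith('kolla-'))  # True
os.rmdir(temp_dir)  # clean up the sketch's directory
```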
Finding Dockerfile.j2 Templates
- Use the `KollaWorker` object to find all `Dockerfile.j2` files (`kolla/image/build.py`):
# ...
def run_build():
    # ...
    kolla.find_dockerfiles()
    # ...
- Build the list of all directories containing a `Dockerfile.j2` (`kolla/image/build.py`):
class KollaWorker(object):
    # ...
    def find_dockerfiles(self):
        """Recursive search for Dockerfiles in the working directory."""
        self.docker_build_paths = list()
        path = self.working_dir
        filename = 'Dockerfile.j2'

        for root, dirs, names in os.walk(path):
            if filename in names:
                self.docker_build_paths.append(root)
                LOG.debug('Found %s', root.split(self.working_dir)[1])

        LOG.debug('Found %d Dockerfiles', len(self.docker_build_paths))
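The search is a plain `os.walk` over the working directory. A self-contained sketch over a throwaway tree (`find_template_dirs` is an illustrative helper):

```python
import os
import tempfile

def find_template_dirs(root, filename='Dockerfile.j2'):
    """Collect every directory under root that contains `filename`."""
    return [r for r, _dirs, names in os.walk(root) if filename in names]

with tempfile.TemporaryDirectory() as root:
    for image in ('base', 'nova-compute'):
        os.makedirs(os.path.join(root, image))
        open(os.path.join(root, image, 'Dockerfile.j2'), 'w').close()
    found = find_template_dirs(root)
print(len(found))  # 2
```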
Generating Dockerfiles
- Use the `KollaWorker` object to generate the `Dockerfile`s (`kolla/image/build.py`):
# ...
def run_build():
    # ...
    kolla.create_dockerfiles()
    # ...
- First assemble the template variables, then render each `Dockerfile.j2` template into a `Dockerfile` (`kolla/image/build.py`):
class KollaWorker(object):
    # ...
    def create_dockerfiles(self):
        kolla_version = version.version_info.cached_version_string()
        supported_distro_release = common_config.DISTRO_RELEASE.get(
            self.base)
        for path in self.docker_build_paths:
            template_name = "Dockerfile.j2"
            image_name = path.split("/")[-1]
            ts = time.time()
            build_date = datetime.datetime.fromtimestamp(ts).strftime(
                '%Y%m%d')
            values = {'base_distro': self.base,
                      'base_image': self.conf.base_image,
                      'base_distro_tag': self.base_tag,
                      'base_arch': self.base_arch,
                      'supported_distro_release': supported_distro_release,
                      'install_metatype': self.install_metatype,
                      'image_prefix': self.image_prefix,
                      'install_type': self.install_type,
                      'namespace': self.namespace,
                      'tag': self.tag,
                      'maintainer': self.maintainer,
                      'kolla_version': kolla_version,
                      'image_name': image_name,
                      'users': self.get_users(),
                      'rpm_setup': self.rpm_setup,
                      'build_date': build_date}
            env = jinja2.Environment(  # nosec: not used to render HTML
                loader=jinja2.FileSystemLoader(self.working_dir))
            env.filters.update(self._get_filters())
            env.globals.update(self._get_methods())
            tpl_path = os.path.join(
                os.path.relpath(path, self.working_dir),
                template_name)

            template = env.get_template(tpl_path)
            if self.conf.template_override:
                tpl_dict = self._merge_overrides(self.conf.template_override)
                template_name = os.path.basename(tpl_dict.keys()[0])
                values['parent_template'] = template
                env = jinja2.Environment(  # nosec: not used to render HTML
                    loader=jinja2.DictLoader(tpl_dict))
                env.filters.update(self._get_filters())
                env.globals.update(self._get_methods())
                template = env.get_template(template_name)
            content = template.render(values)
            content_path = os.path.join(path, 'Dockerfile')
            with open(content_path, 'w') as f:
                LOG.debug("Rendered %s into:", tpl_path)
                LOG.debug(content)
                f.write(content)
                LOG.debug("Wrote it to %s", content_path)
The custom Jinja2 filters and methods registered above (`kolla/image/build.py`):
from kolla.template import filters as jinja_filters
from kolla.template import methods as jinja_methods
# ...

class KollaWorker(object):
    # ...
    def _get_filters(self):
        filters = {
            'customizable': jinja_filters.customizable,
        }
        return filters

    def _get_methods(self):
        """Mapping of available Jinja methods.

        return a dictionary that maps available function names and their
        corresponding python methods to make them available in jinja templates
        """
        return {
            'debian_package_install': jinja_methods.debian_package_install,
        }
The `customizable` filter (`kolla/template/filters.py`):
from jinja2 import contextfilter


@contextfilter
def customizable(context, val_list, call_type):
    name = context['image_name'].replace("-", "_") + "_" + call_type + "_"
    if name + "override" in context:
        return context[name + "override"]
    if name + "append" in context:
        val_list.extend(context[name + "append"])
    if name + "remove" in context:
        for removal in context[name + "remove"]:
            if removal in val_list:
                val_list.remove(removal)
    return val_list
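Stripped of the Jinja2 plumbing, `customizable` is a three-way override/append/remove merge keyed on the image name. A hedged sketch with a plain dict standing in for the template context (`customizable_sketch` is illustrative, not Kolla's function):

```python
def customizable_sketch(context, val_list, call_type):
    """Override wins outright; otherwise apply append/remove lists."""
    name = context['image_name'].replace('-', '_') + '_' + call_type + '_'
    if name + 'override' in context:
        return context[name + 'override']
    if name + 'append' in context:
        val_list = val_list + context[name + 'append']
    if name + 'remove' in context:
        val_list = [v for v in val_list if v not in context[name + 'remove']]
    return val_list

ctx = {'image_name': 'nova-compute',
       'nova_compute_packages_append': ['vim'],
       'nova_compute_packages_remove': ['curl']}
print(customizable_sketch(ctx, ['qemu', 'curl'], 'packages'))
# ['qemu', 'vim']
```

This is how a deployer can override, extend, or trim package lists per image purely from template variables, without editing the `Dockerfile.j2` itself.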
Non-build Functions
- For template-only builds (generating `Dockerfile`s without building), saving the dependency graph, listing images, or listing image dependencies, `run_build` returns immediately after the corresponding handling (`kolla/image/build.py`):
def run_build():
    # ...
    if conf.template_only:
        LOG.info('Dockerfiles are generated in %s', kolla.working_dir)
        return

    # We set the atime and mtime to 0 epoch to allow the Docker cache
    # to work like we want. A different size or hash will still force a rebuild
    kolla.set_time()

    if conf.save_dependency:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.save_dependency(conf.save_dependency)
        LOG.info('Docker images dependency are saved in %s',
                 conf.save_dependency)
        return
    if conf.list_images:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.list_images()
        return
    if conf.list_dependencies:
        kolla.build_image_list()
        kolla.find_parents()
        kolla.filter_images()
        kolla.list_dependencies()
        return
- Build the image list:
def build_image_list(self):
    def process_source_installation(image, section):
        installation = dict()
        # NOTE(jeffrey4l): source is not needed when the type is None
        if self.conf._get('type', self.conf._get_group(section)) is None:
            if image.parent_name is None:
                LOG.debug('No source location found in section %s',
                          section)
        else:
            installation['type'] = self.conf[section]['type']
            installation['source'] = self.conf[section]['location']
            installation['name'] = section
            if installation['type'] == 'git':
                installation['reference'] = self.conf[section]['reference']
        return installation

    all_sections = (set(six.iterkeys(self.conf._groups)) |
                    set(self.conf.list_all_sections()))

    for path in self.docker_build_paths:
        # Reading parent image name
        with open(os.path.join(path, 'Dockerfile')) as f:
            content = f.read()

        image_name = os.path.basename(path)
        canonical_name = (self.namespace + '/' + self.image_prefix +
                          image_name + ':' + self.tag)
        parent_search_pattern = re.compile(r'^FROM.*$', re.MULTILINE)
        match = re.search(parent_search_pattern, content)
        if match:
            parent_name = match.group(0).split(' ')[1]
        else:
            parent_name = ''
        del match
        image = Image(image_name, canonical_name, path,
                      parent_name=parent_name,
                      logger=make_a_logger(self.conf, image_name),
                      docker_client=self.dc)

        if self.install_type == 'source':
            # NOTE(jeffrey4l): register the opts if the section didn't
            # register in the kolla/common/config.py file
            if image.name not in self.conf._groups:
                self.conf.register_opts(common_config.get_source_opts(),
                                        image.name)
            image.source = process_source_installation(image, image.name)
            for plugin in [match.group(0) for match in
                           (re.search('^{}-plugin-.+'.format(image.name),
                                      section) for section in
                            all_sections) if match]:
                try:
                    self.conf.register_opts(
                        common_config.get_source_opts(),
                        plugin
                    )
                except cfg.DuplicateOptError:
                    LOG.debug('Plugin %s already registered in config',
                              plugin)
                image.plugins.append(
                    process_source_installation(image, plugin))
            for addition in [
                match.group(0) for match in
                (re.search('^{}-additions-.+'.format(image.name),
                           section) for section in all_sections)
                    if match]:
                try:
                    self.conf.register_opts(
                        common_config.get_source_opts(),
                        addition
                    )
                except cfg.DuplicateOptError:
                    LOG.debug('Addition %s already registered in config',
                              addition)
                image.additions.append(
                    process_source_installation(image, addition))

        self.images.append(image)
- Associate each image with its parent and children:
def find_parents(self):
    """Associate all images with parents and children."""
    sort_images = dict()

    for image in self.images:
        sort_images[image.canonical_name] = image

    for parent_name, parent in sort_images.items():
        for image in sort_images.values():
            if image.parent_name == parent_name:
                parent.children.append(image)
                image.parent = parent
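The parent/child wiring can be reproduced with a tiny stand-in `Image` class (hypothetical, for illustration only):

```python
class Image:
    """Minimal stand-in for kolla's Image: a name, a parent link, children."""
    def __init__(self, canonical_name, parent_name=''):
        self.canonical_name = canonical_name
        self.parent_name = parent_name
        self.parent = None
        self.children = []

def find_parents(images):
    """Link each image to its parent by canonical name, like KollaWorker."""
    by_name = {i.canonical_name: i for i in images}
    for image in images:
        parent = by_name.get(image.parent_name)
        if parent is not None:
            parent.children.append(image)
            image.parent = parent

base = Image('kolla/centos-source-base:5.0.0')
nova = Image('kolla/centos-source-nova-base:5.0.0', base.canonical_name)
find_parents([base, nova])
print(nova.parent is base)  # True
```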
- Filter out images that do not need to be built or processed, based on command-line regexes or `profile` settings:
def filter_images(self):
    """Filter which images to build."""
    filter_ = list()

    if self.regex:
        filter_ += self.regex
    elif self.conf.profile:
        for profile in self.conf.profile:
            if profile not in self.conf.profiles:
                self.conf.register_opt(cfg.ListOpt(profile,
                                                   default=[]),
                                       'profiles')
            if len(self.conf.profiles[profile]) == 0:
                msg = 'Profile: {} does not exist'.format(profile)
                raise ValueError(msg)
            else:
                filter_ += self.conf.profiles[profile]

    if filter_:
        patterns = re.compile(r"|".join(filter_).join('()'))
        for image in self.images:
            if image.status in (STATUS_MATCHED, STATUS_SKIPPED):
                continue
            if re.search(patterns, image.name):
                image.status = STATUS_MATCHED
                while (image.parent is not None and
                       image.parent.status not in (STATUS_MATCHED,
                                                   STATUS_SKIPPED)):
                    image = image.parent
                    if self.conf.skip_parents:
                        image.status = STATUS_SKIPPED
                    elif (self.conf.skip_existing and
                          image.in_docker_cache()):
                        image.status = STATUS_SKIPPED
                    else:
                        image.status = STATUS_MATCHED
                    LOG.debug('Image %s matched regex', image.name)
            else:
                image.status = STATUS_UNMATCHED
    else:
        for image in self.images:
            image.status = STATUS_MATCHED
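One subtlety here is the pattern construction: `r"|".join(filter_).join('()')` first joins the filters with `|`, then wraps the result in parentheses, because `str.join` treats the two-character string `'()'` as a sequence of `'('` and `')'`. A quick sketch:

```python
import re

filter_ = ['nova', 'glance-api']
# "nova|glance-api" ends up wrapped as "(nova|glance-api)"
pattern = re.compile(r"|".join(filter_).join('()'))
print(pattern.pattern)                           # (nova|glance-api)
print(bool(re.search(pattern, 'nova-compute')))  # True
print(bool(re.search(pattern, 'keystone')))      # False
```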
- Save the dependency graph:
def save_dependency(self, to_file):
    try:
        import graphviz
    except ImportError:
        LOG.error('"graphviz" is required for save dependency')
        raise
    dot = graphviz.Digraph(comment='Docker Images Dependency')
    dot.body.extend(['rankdir=LR'])
    for image in self.images:
        if image.status not in [STATUS_MATCHED]:
            continue
        dot.node(image.name)
        if image.parent is not None:
            dot.edge(image.parent.name, image.name)

    with open(to_file, 'w') as f:
        f.write(dot.source)
- List the matched images:
def list_images(self):
    for count, image in enumerate([
            image for image in self.images if image.status == STATUS_MATCHED
    ]):
        print(count + 1, ':', image.name)
- List the image dependency tree:
def list_dependencies(self):
    match = False
    for image in self.images:
        if image.status in [STATUS_MATCHED]:
            match = True
        if image.parent is None:
            base = image
    if not match:
        print('Nothing matched!')
        return

    def list_children(images, ancestry):
        children = six.next(iter(ancestry.values()))
        for image in images:
            if image.status not in [STATUS_MATCHED]:
                continue
            if not image.children:
                children.append(image.name)
            else:
                newparent = {image.name: []}
                children.append(newparent)
                list_children(image.children, newparent)

    ancestry = {base.name: []}
    list_children(base.children, ancestry)
    json.dump(ancestry, sys.stdout, indent=2)
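The nested-dict structure that `list_dependencies` prints can be demonstrated with plain `(name, children)` pairs standing in for `Image` objects (an illustrative simplification):

```python
import json

def list_children(images, ancestry):
    """Build the nested {parent: [children...]} structure recursively."""
    children = next(iter(ancestry.values()))
    for name, subtree in images:
        if not subtree:
            children.append(name)       # leaf image: just the name
        else:
            node = {name: []}           # inner image: name -> its children
            children.append(node)
            list_children(subtree, node)

tree = [('openstack-base', [('nova-base', [('nova-compute', [])])])]
ancestry = {'base': []}
list_children(tree, ancestry)
print(json.dumps(ancestry))
# {"base": [{"openstack-base": [{"nova-base": ["nova-compute"]}]}]}
```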
Creating the Task Queue
- Use `KollaWorker` to build a task queue, then spawn worker threads according to the configured numbers of build and push threads to execute the queued tasks (`kolla/image/build.py`):
def run_build():
    # ...
    push_queue = six.moves.queue.Queue()
    queue = kolla.build_queue(push_queue)
    workers = []

    with join_many(workers):
        try:
            for x in six.moves.range(conf.threads):
                worker = WorkerThread(conf, queue)
                worker.setDaemon(True)
                worker.start()
                workers.append(worker)

            for x in six.moves.range(conf.push_threads):
                worker = WorkerThread(conf, push_queue)
                worker.setDaemon(True)
                worker.start()
                workers.append(worker)

            # sleep until queue is empty
            while queue.unfinished_tasks or push_queue.unfinished_tasks:
                time.sleep(3)

            # ensure all threads exited happily
            push_queue.put(WorkerThread.tombstone)
            queue.put(WorkerThread.tombstone)
        except KeyboardInterrupt:
            for w in workers:
                w.should_stop = True
            push_queue.put(WorkerThread.tombstone)
            queue.put(WorkerThread.tombstone)
            raise
- The `build_queue` method of `KollaWorker` creates a `BuildTask` for each base (parentless) image and puts it on the queue; tasks for child images are queued later as follow-ups once their parent has been built (`kolla/image/build.py`):
class KollaWorker(object):
    # ...
    def build_queue(self, push_queue):
        """Organizes Queue list.

        Return a list of Queues that have been organized into a hierarchy
        based on dependencies
        """
        self.build_image_list()
        self.find_parents()
        self.filter_images()

        queue = six.moves.queue.Queue()

        for image in self.images:
            if image.status == STATUS_UNMATCHED:
                # Don't bother queuing up build tasks for things that
                # were not matched in the first place... (not worth the
                # effort to run them, if they won't be used anyway).
                continue
            if image.parent is None:
                queue.put(BuildTask(self.conf, image, push_queue))
                LOG.info('Added image %s to queue', image.name)

        return queue
- Each thread runs a `WorkerThread` that repeatedly takes a task (such as a `BuildTask`) from the queue and calls its `run` method, retrying up to the configured number of times; on success, any follow-up tasks are queued (`kolla/image/build.py`):
class WorkerThread(threading.Thread):
    """Thread that executes tasks until the queue provides a tombstone."""

    #: Object to be put on worker queues to get them to die.
    tombstone = object()

    def __init__(self, conf, queue):
        super(WorkerThread, self).__init__()
        self.queue = queue
        self.conf = conf
        self.should_stop = False

    def run(self):
        while not self.should_stop:
            task = self.queue.get()
            if task is self.tombstone:
                # Ensure any other threads also get the tombstone.
                self.queue.put(task)
                break
            try:
                for attempt in six.moves.range(self.conf.retries + 1):
                    if self.should_stop:
                        break
                    LOG.info("Attempt number: %s to run task: %s ",
                             attempt + 1, task.name)
                    try:
                        task.run()
                        if task.success:
                            break
                    except Exception:
                        LOG.exception('Unhandled error when running %s',
                                      task.name)
                    # try again...
                    task.reset()
                if task.success and not self.should_stop:
                    for next_task in task.followups:
                        LOG.info('Added next task %s to queue',
                                 next_task.name)
                        self.queue.put(next_task)
            finally:
                self.queue.task_done()
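The tombstone shutdown pattern, where a sentinel object is re-enqueued by each worker so every thread eventually sees it, works independently of Kolla. A minimal runnable sketch:

```python
import queue
import threading

tombstone = object()   # unique sentinel; compared by identity, not equality
tasks = queue.Queue()
results = []

def worker():
    while True:
        task = tasks.get()
        if task is tombstone:
            tasks.put(task)   # pass it on so sibling workers also stop
            break
        results.append(task * 2)
        tasks.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for n in (1, 2, 3):
    tasks.put(n)
tasks.join()           # block until task_done() was called for each item
tasks.put(tombstone)   # a single tombstone suffices for every worker
for t in threads:
    t.join()
print(sorted(results))  # [2, 4, 6]
```

Note that `task_done` is deliberately not called for the tombstone; the main thread has already passed `tasks.join()` by then, mirroring how `run_build` waits on `unfinished_tasks` before posting tombstones.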
Executing Build Tasks
- `BuildTask` initialization and its `run` method, which ultimately calls `builder` to build the image (`kolla/image/build.py`):
class BuildTask(DockerTask):
    """Task that builds out an image."""

    def __init__(self, conf, image, push_queue):
        super(BuildTask, self).__init__()
        self.conf = conf
        self.image = image
        self.push_queue = push_queue
        self.nocache = not conf.cache
        self.forcerm = not conf.keep
        self.logger = image.logger

    @property
    def name(self):
        return 'BuildTask(%s)' % self.image.name

    def run(self):
        self.builder(self.image)
        if self.image.status in (STATUS_BUILT, STATUS_SKIPPED):
            self.success = True
- During a build, source archives are fetched and assembled first, and then Docker builds the image (`kolla/image/build.py`):
class BuildTask(DockerTask):
    # ...
    def builder(self, image):

        def make_an_archive(items, arcname, item_child_path=None):
            if not item_child_path:
                item_child_path = arcname
            archives = list()
            items_path = os.path.join(image.path, item_child_path)
            for item in items:
                archive_path = self.process_source(image, item)
                if image.status in STATUS_ERRORS:
                    raise ArchivingError
                archives.append(archive_path)
            if archives:
                for archive in archives:
                    with tarfile.open(archive, 'r') as archive_tar:
                        archive_tar.extractall(path=items_path)
            else:
                try:
                    os.mkdir(items_path)
                except OSError as e:
                    if e.errno == errno.EEXIST:
                        self.logger.info(
                            'Directory %s already exist. Skipping.',
                            items_path)
                    else:
                        self.logger.error('Failed to create directory %s: %s',
                                          items_path, e)
                        image.status = STATUS_CONNECTION_ERROR
                        raise ArchivingError
            arc_path = os.path.join(image.path, '%s-archive' % arcname)
            with tarfile.open(arc_path, 'w') as tar:
                tar.add(items_path, arcname=arcname)
            return len(os.listdir(items_path))

        self.logger.debug('Processing')

        if image.status == STATUS_SKIPPED:
            self.logger.info('Skipping %s' % image.name)
            return

        if image.status == STATUS_UNMATCHED:
            return

        if (image.parent is not None and
                image.parent.status in STATUS_ERRORS):
            self.logger.error('Parent image error\'d with message "%s"',
                              image.parent.status)
            image.status = STATUS_PARENT_ERROR
            return

        image.status = STATUS_BUILDING
        self.logger.info('Building')

        if image.source and 'source' in image.source:
            self.process_source(image, image.source)
            if image.status in STATUS_ERRORS:
                return

        if self.conf.install_type == 'source':
            try:
                plugins_am = make_an_archive(image.plugins, 'plugins')
            except ArchivingError:
                self.logger.error(
                    "Failed turning any plugins into a plugins archive")
                return
            else:
                self.logger.debug(
                    "Turned %s plugins into plugins archive",
                    plugins_am)
            try:
                additions_am = make_an_archive(image.additions, 'additions')
            except ArchivingError:
                self.logger.error(
                    "Failed turning any additions into a additions archive")
                return
            else:
                self.logger.debug(
                    "Turned %s additions into additions archive",
                    additions_am)

        # Pull the latest image for the base distro only
        pull = self.conf.pull if image.parent is None else False

        buildargs = self.update_buildargs()
        try:
            for response in self.dc.build(path=image.path,
                                          tag=image.canonical_name,
                                          nocache=not self.conf.cache,
                                          rm=True,
                                          pull=pull,
                                          forcerm=self.forcerm,
                                          buildargs=buildargs):
                stream = json.loads(response.decode('utf-8'))
                if 'stream' in stream:
                    for line in stream['stream'].split('\n'):
                        if line:
                            self.logger.info('%s', line)
                if 'errorDetail' in stream:
                    image.status = STATUS_ERROR
                    self.logger.error('Error\'d with the following message')
                    for line in stream['errorDetail']['message'].split('\n'):
                        if line:
                            self.logger.error('%s', line)
                    return
        except docker.errors.DockerException:
            image.status = STATUS_ERROR
            self.logger.exception('Unknown docker error when building')
        except Exception:
            image.status = STATUS_ERROR
            self.logger.exception('Unknown error when building')
        else:
            image.status = STATUS_BUILT
            self.logger.info('Built')
- When fetching and assembling source packages, a different method is used for each source type, but the resulting archive is always named after the image plus `-archive` (e.g. `openstack-base-archive`); it is copied into the image and extracted during the build (`kolla/image/build.py`):
class BuildTask(DockerTask):
    # ...
    def process_source(self, image, source):
        dest_archive = os.path.join(image.path, source['name'] + '-archive')

        if source.get('type') == 'url':
            self.logger.debug("Getting archive from %s", source['source'])
            try:
                r = requests.get(source['source'], timeout=self.conf.timeout)
            except requests_exc.Timeout:
                self.logger.exception(
                    'Request timed out while getting archive from %s',
                    source['source'])
                image.status = STATUS_ERROR
                return

            if r.status_code == 200:
                with open(dest_archive, 'wb') as f:
                    f.write(r.content)
            else:
                self.logger.error(
                    'Failed to download archive: status_code %s',
                    r.status_code)
                image.status = STATUS_ERROR
                return

        elif source.get('type') == 'git':
            clone_dir = '{}-{}'.format(dest_archive,
                                       source['reference'].replace('/', '-'))
            if os.path.exists(clone_dir):
                self.logger.info("Clone dir %s exists. Removing it.",
                                 clone_dir)
                shutil.rmtree(clone_dir)

            try:
                self.logger.debug("Cloning from %s", source['source'])
                git.Git().clone(source['source'], clone_dir)
                git.Git(clone_dir).checkout(source['reference'])
                reference_sha = git.Git(clone_dir).rev_parse('HEAD')
                self.logger.debug("Git checkout by reference %s (%s)",
                                  source['reference'], reference_sha)
            except Exception as e:
                self.logger.error("Failed to get source from git for %s",
                                  image.name)
                self.logger.error("Error: %s", e)
                # clean-up clone folder to retry
                shutil.rmtree(clone_dir)
                image.status = STATUS_ERROR
                return

            with tarfile.open(dest_archive, 'w') as tar:
                tar.add(clone_dir, arcname=os.path.basename(clone_dir))

        elif source.get('type') == 'local':
            self.logger.debug("Getting local archive from %s",
                              source['source'])
            if os.path.isdir(source['source']):
                with tarfile.open(dest_archive, 'w') as tar:
                    tar.add(source['source'],
                            arcname=os.path.basename(source['source']))
            else:
                shutil.copyfile(source['source'], dest_archive)

        else:
            self.logger.error("Wrong source type '%s'", source.get('type'))
            image.status = STATUS_ERROR
            return

        # Set time on destination archive to epoch 0
        os.utime(dest_archive, (0, 0))
        return dest_archive
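The final `os.utime(dest_archive, (0, 0))` pins the archive's atime and mtime to the epoch, so Docker's build cache keys on the archive's content (size/hash) rather than on when it happened to be fetched. A quick sketch of the call:

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'archive contents')
    path = f.name

os.utime(path, (0, 0))                 # atime = mtime = epoch 0
mtime = int(os.stat(path).st_mtime)
print(mtime)  # 0
os.remove(path)                        # clean up the sketch's file
```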