Docker study notes - Part 5: Container orchestration
Posted Jun 28, 2020 • 5 min read
1.1 Docker container orchestration
1.1.1 Introduction to orchestration
Docker best practice recommends running only one process per container. A real application, however, is composed of multiple components; running multiple components means running multiple containers, and those containers then need to be orchestrated.
So-called orchestration is mainly the process of automatically configuring, coordinating, and managing multiple Docker containers. Docker provides the docker-compose tool for this.
1.1.2 Introduction to Docker-compose
Compose is a tool for defining and running applications made up of one or more containers. It is written in Python and describes multi-container applications in a yml file. It is well suited to deploying one or more containers in a single-machine environment and automatically wiring the related containers together.
In fact, what docker-compose does is equivalent to parsing the configuration file and then executing a series of docker commands according to that configuration.
1.1.3 Docker-compose installation
Official installation document:
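The official document essentially comes down to downloading a release binary and making it executable. A sketch of the commonly documented approach; the version number here is only an example from around the time of this article (mid-2020), check the releases page for the current one:

```shell
# Download a docker-compose release binary for this OS/architecture
# (1.26.0 is an example version, not necessarily the latest):
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose

# Make it executable:
sudo chmod +x /usr/local/bin/docker-compose

# Verify the installation:
docker-compose --version
```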
Docker-compose basic example
1: Prepare the images to start. Although Compose can also build images directly, it is recommended to prepare them first.
2: Write a docker-compose.yml file describing the services.
3: Then simply run docker-compose up -d to start everything.
4: An example docker-compose.yml is as follows:
```yaml
version: '2'
services:
  mysqldb:
    image: 'mysql:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=cc
    volumes:
      - /ccuse/programes/mysqldata:/var/lib/mysql
    privileged: true
  web:
    image: 'cctomcat:9.0'
    ports:
      - "9080:8080"
    volumes:
      - /ccuse/programes/tomcat9docker/webapps/test:/usr/local/tomcat/webapps/test
    privileged: true
    links:
      - mysqldb:dblink
```
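With the yml file in place, the whole life cycle is driven by a few docker-compose commands (run from the directory containing docker-compose.yml; these assume a running Docker daemon):

```shell
# Start all services defined in docker-compose.yml in the background:
docker-compose up -d

# List the containers belonging to this project:
docker-compose ps

# Follow the logs of a single service (service names come from the yml):
docker-compose logs -f web

# Stop and remove the project's containers (add -v to also remove volumes):
docker-compose down
```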
1.1.4 Configuration of Docker-compose yml file
1: A standard configuration file contains three top-level sections: version, services, and networks. For a detailed reference guide, see the official website.
2: There are currently three versions of the file format: 1, 2, and 3.
3: The common configuration options under services are:
(1) service name: identifies a service; the name is up to you
(2) image: specifies the image name or image ID for the service. If the image does not exist locally, Compose will try to pull it. A service must use at least one of build and image.
(3) build: besides using a prebuilt image, a service can also be based on a Dockerfile, with the build executed when the service is started with up. The build key can point to the folder containing the Dockerfile; Compose will use it to build the image automatically and then start the service container from that image. If you specify both image and build, Compose builds the image and names it after the value of image.
(4) args (under build): similar to the ARG instruction in a Dockerfile; lets you specify environment variables for the build process, which are discarded after the build succeeds.
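A minimal sketch of a service built from a Dockerfile rather than a pulled image; the ./web path and the APP_VERSION build argument are hypothetical, not from the article's example:

```yaml
version: '2'
services:
  web:
    # Build the image from ./web/Dockerfile instead of pulling one
    build:
      context: ./web
      dockerfile: Dockerfile
      args:
        - APP_VERSION=1.0   # available only during the build, like ARG
    # Because image is also given, the built image is tagged with this name
    image: myweb:latest
```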
(5) command: overrides the default command executed after the container starts
(6) container_name: customizes the container name
(7) links: specifies links to other containers, with the same effect as the Docker client's --link
(8) volumes: mounts host paths or files into the container
(9) ports: maps host ports to container ports
(10) environment: sets environment variables. Like the ENV instruction in a Dockerfile, the variables remain set in the container; the effect is similar to docker run -e
(11) privileged: gives the container extended privileges (often used here to avoid permission problems with mounted directories)
(12) depends_on: project containers generally need to start in a certain order; depends_on expresses dependencies between containers and controls the startup sequence.
There are many more options; see the official documentation.
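As an illustration of several of the options above (container_name, command, depends_on), a minimal sketch reusing the article's mysqldb/web example; the container name my-mysql and the catalina.sh command are assumptions for illustration:

```yaml
version: '2'
services:
  mysqldb:
    image: 'mysql:latest'
    container_name: my-mysql      # (6) custom container name
    environment:
      - MYSQL_ROOT_PASSWORD=cc    # (10) like docker run -e
  web:
    image: 'cctomcat:9.0'
    command: catalina.sh run      # (5) overrides the image's default command
    depends_on:
      - mysqldb                   # (12) start mysqldb before web
    ports:
      - "9080:8080"               # (9) host port 9080 -> container port 8080
```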
Docker-compose network configuration
In addition to using --link for communication between containers, it is now recommended to use a custom network and communicate via service names. Each custom network can be configured in many ways, including the network driver and the network address range. For example:

```yaml
networks:
  frontend:
  backend:
```
1: You will see that frontend and backend are left empty, which means everything uses the defaults. In other words, in a stand-alone environment the bridge driver is used; in a Swarm environment the overlay driver is used, and the address range is left entirely up to the Docker engine.
2: Each service's configuration also has a networks key, which specifies which networks the service connects to; you can specify more than one, for example:
```yaml
services:
  nginx:
    ...
    networks:
      - frontend
  web:
    ...
    networks:
      - frontend
      - backend
  mysql:
    ...
    networks:
      - backend
```
3: Containers connected to the same network can reach each other; containers on different networks are isolated.
4: Containers on the same network can access each other by service name.
5:Add the network configuration to the previous example, as follows:
```yaml
version: '2'
services:
  mysqldb:
    image: 'mysql:latest'
    environment:
      - MYSQL_ROOT_PASSWORD=cc
    volumes:
      - /ccuse/programes/mysqldata:/var/lib/mysql
    privileged: true
    networks:
      - frontend
  web:
    image: 'cctomcat:9.0'
    ports:
      - "9080:8080"
    volumes:
      - /ccuse/programes/tomcat9docker/webapps/test:/usr/local/tomcat/webapps/test
    privileged: true
    links:
      - mysqldb:dblink
    networks:
      - frontend
networks:
  frontend:
  backend:
```
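With both services on the frontend network, the web container can resolve the mysqldb service by name, so the links entry becomes unnecessary. A quick way to see this (assuming the stack above is already running via docker-compose up -d, and that ping is available inside the image):

```shell
# From inside the web container, the mysqldb service is reachable by name:
docker-compose exec web ping -c 1 mysqldb

# Application code can likewise use the service name as a hostname, e.g. a
# JDBC URL such as jdbc:mysql://mysqldb:3306/testdb
# (testdb is a hypothetical database name, not from the article).
```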
1.2 Introduction to Docker cluster management tools
1.2.1 Tools natively provided by Docker
1: compose: a tool for assembling multiple containers into an application
2: machine: a command-line tool that simplifies Docker installation and supports installing Docker on multiple platforms, making it convenient to set up Docker in various environments such as laptops, cloud platforms, and data centers. A machine is essentially the combination of a Docker host and a configured Docker client.
3: swarm: the container cluster management tool provided by the Docker community; it turns a system composed of multiple Docker hosts into a single virtual Docker host.
Google's open-source Kubernetes automates the deployment, scaling, and operation of application containers across host clusters, providing container-centric infrastructure. With Kubernetes you can easily manage containers across multiple Docker hosts. Its main capabilities are as follows:
1: Abstracts multiple Docker hosts into a single pool of resources and manages containers at the cluster level, including task scheduling, resource management, elastic scaling, rolling upgrades, and other functions.
2: Uses the orchestration system to quickly build container clusters, provides load balancing, and solves the problems of inter-container linking and communication.
3: Automatically manages and repairs containers. Simply put: if you create a cluster with ten containers and one container shuts down abnormally, Kubernetes will try to restart or reallocate it, always keeping ten containers running.
4: Similar mainstream tools include Apache Mesos, etc.