To start with the big picture: a VM depends on a hypervisor, while Docker is closer to OS-level virtualization; it shares the host kernel and gives each container its own isolated user space for your application to run in. So the Docker approach is more lightweight.
First of all, you need to have the Docker engine installed and running.
Check your version and verify everything works:
$ docker version # returns your engine version
$ docker info # returns system-wide information, including container counts.
$ docker run hello-world # pull the hello-world image if needed, create a container from it, and run it.
$ docker ps -a # list running and stopped containers
$ docker images # list local images and their info, e.g. ID, size, etc.
$ docker rmi hello-world:latest # rmi = remove image
$ docker start <container>
$ docker stop <container>
$ docker rm <container> # remove container
Running docker version, we notice there are two components on your machine: a client and a daemon (server). When we type docker run hello-world, we actually enter the command on the client side; the client then makes API calls to the daemon, which implements the Docker Remote API. The daemon first checks whether a local copy of the hello-world image is available; if not, it searches Docker Hub, pulls the image (i.e. makes a local copy), and creates a local container from it.
You can think of images as stopped containers, and containers as running images.
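A minimal sketch of that distinction, using the hello-world image from above: docker create builds a stopped container from an image, and docker start sets it running.

```shell
# Skip gracefully when no Docker engine is available.
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running, skipping"; exit 0; }

docker pull hello-world                 # fetch the image; no container exists yet
docker create --name hw hello-world     # make a (stopped) container from the image
docker start -a hw                      # start it and attach, so its output is shown
docker rm hw                            # clean up the stopped container
```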
One-liners to stop and remove all Docker containers:
$ docker stop $(docker ps -a -q)
$ docker rm $(docker ps -a -q)
One-liner to remove all Docker images:
$ docker rmi $(docker images -a -q)
docker run -d --name web -p 80:8080 billchenxi/bill-image
-d: detached mode, running in the background
--name: a unique name for the container
-p 80:8080: map port 80 on the Docker host to port 8080 inside the container, so http://&lt;website&gt;:80 in the browser reaches port 8080 inside the container.
billchenxi/bill-image: the image to use. The part before the / is the namespace (the Hub user or organization) and the part after it is the repository name. Top-level (official) repositories live at the root of Docker Hub, while user repositories live under their own namespace.
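For example (the user image below is the one from this article and may not be public): official top-level repositories are pulled by bare name, user repositories by namespace/repository.

```shell
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running, skipping"; exit 0; }

docker pull ubuntu:latest                   # top-level (official) repository
docker pull billchenxi/bill-image || true   # user repo: <namespace>/<repo>; may not exist publicly
```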
Interacting with a container:
docker run -it --name temp ubuntu:latest /bin/bash
The prompt will change:
Since a Docker container is designed to be lightweight, it doesn't come with many applications. vim, for example, is missing:

root@09fb78600fe7:/# vim /etc/hosts
bash: vim: command not found
ls, of course, is included:

root@09fb78600fe7:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
How do you EXIT the container? You cannot simply type exit: that exits bash, and since bash is the container's main process, the container stops too.
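You can confirm this from the host after typing exit inside the container (container name temp from the run command above): the container shows up as stopped.

```shell
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running, skipping"; exit 0; }

# After `exit` inside the container, its STATUS column reads "Exited (0) ...".
docker ps -a --filter name=temp
```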
root@09fb78600fe7:/# ps -elf
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
4 S root 1 0 0 80 0 - 4627 - 15:40 pts/0 00:00:00 /bin/bash
4 R root 13 1 0 80 0 - 8601 - 15:45 pts/0 00:00:00 ps -elf

root@09fb78600fe7:/# top
top - 15:45:51 up 1:27, 0 users, load average: 0.00, 0.02, 0.00
Tasks: 2 total, 1 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 3066348 total, 1638640 free, 301040 used, 1126668 buff/cache
KiB Swap: 1048572 total, 1048572 free, 0 used. 2617816 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 18508 3488 3068 S 0.0 0.1 0:00.04 bash
14 root 20 0 36636 3168 2660 R 0.0 0.1 0:00.00 top
You see that only bash is running. A container is often a single-process construct: it runs only bash, and other processes such as top are temporary children that bash created (forked) when we ran them.
The correct way to exit without stopping the container is to press Ctrl+P followed by Ctrl+Q, which detaches from the container and leaves it running.
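Once detached, the container keeps running and you can get back inside. docker exec starts a process that is not PID 1, so exiting it does not stop the container (container name temp from above):

```shell
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running, skipping"; exit 0; }

# Run a one-off command inside the still-running container...
docker exec temp ps -elf || true
# ...or open a second interactive shell; `exit` here leaves the container running.
docker exec -it temp /bin/bash || true
```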
In one line: the swarm manages the nodes, and services manage the containers that run on the nodes. They are different things.
Since Docker 1.12, a swarm (a group of Docker engines) is a true native cluster, so "cluster" and "swarm" are interchangeable; swarm mode, however, is optional for each engine. Manager nodes maintain the swarm (3–5 is recommended, with only one acting as leader), while worker nodes only execute tasks. Services, a declarative way of running and scaling tasks, require swarm mode.
$ docker service create --name <app-front-end> --replicas 10 <image>
The above command requests 10 instances (tasks) of the container. The reason to use a service is that Docker will make sure 10 containers are running at all times: if one goes down, Docker creates a new one to replace it.
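You can watch this self-healing yourself (service name web from the earlier run example; the label used in the filter is the one swarm sets on task containers):

```shell
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running, skipping"; exit 0; }

docker service ps web || true                # note the running tasks
# Force-remove one task's container by hand...
CID="$(docker ps -q --filter label=com.docker.swarm.service.name=web | head -n 1)"
[ -n "$CID" ] && docker rm -f "$CID"
docker service ps web || true                # ...and a replacement task appears shortly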
$ docker swarm init --advertise-addr 192.168.65.3:2377 --listen-addr 192.168.65.3:2377
Swarm initialized: current node (djybu0go53g0rrv9r0c3yvx2w) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4jqzd8tis9kzjcvbc949m9m73tg1z3j789994g7ojmm4muo94y-8s265qaxf7yl8tkhwsxnwodbc 192.168.65.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
--advertise-addr: tells Docker that, no matter how many addresses this machine has, this is the one to use for swarm-related traffic. In real life a server can have many IPs, so this picks the one for the swarm.
--listen-addr: the address this node listens on for swarm traffic. All the nodes, whether on the same network or not, need to be able to reach this IP.
:2377: the swarm port. For reference: 2375 is the unencrypted Docker engine port, 2376 is the TLS-secured engine port, and 2377 is the swarm port.
Of course, the IP address should be your cloud node's address, not localhost.
docker swarm join --token SWMTKN-1-4jqzd8tis9kzjcvbc949m9m73tg1z3j789994g7ojmm4muo94y-8s265qaxf7yl8tkhwsxnwodbc 192.168.65.3:2377
The command above, which is part of the output of docker swarm init, is the exact command you run to add a worker to this swarm. If you lose it, here is how to retrieve it:
$ docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4jqzd8tis9kzjcvbc949m9m73tg1z3j789994g7ojmm4muo94y-8s265qaxf7yl8tkhwsxnwodbc 192.168.65.3:2377
And to add a manager:
$ docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4jqzd8tis9kzjcvbc949m9m73tg1z3j789994g7ojmm4muo94y-e1tr56q56mlk27hg5mdgkut6w 192.168.65.3:2377
The token is the only thing that differentiates managers from workers.
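Because the token is effectively the credential, you can rotate it if it ever leaks; rotation invalidates the old token without affecting nodes that have already joined:

```shell
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running, skipping"; exit 0; }

docker swarm join-token --rotate worker || true    # issue a fresh worker token
docker swarm join-token --rotate manager || true   # issue a fresh manager token
```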
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
djybu0go53g0rrv9r0c3yvx2w * docker-desktop Ready Active Leader 19.03.5
Things to know: a manager is also a worker, and workers can be promoted to managers. Only after you have created the nodes can you deploy services to them.
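Promotion and demotion are each a single command; the hostname comes from docker node ls (worker-1 below is a hypothetical node name):

```shell
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running, skipping"; exit 0; }

NODE="worker-1"                        # hypothetical hostname from `docker node ls`
docker node promote "$NODE" || true    # worker -> manager
docker node demote "$NODE" || true     # manager -> worker
```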
We can deploy services on the nodes now.
$ docker service create --name temp -p 8080:8080 --replicas 5 <docker-image>
$ docker service ls # list the services that are running
$ docker service ps temp # show the tasks (containers) of a service and the node each one runs on.
So a service is like an "application" that lives on the swarm, and the swarm lets users reach your application through any node's address, even if that node isn't running one of its containers. Docker calls this native container-aware load balancer the "routing mesh".
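A quick way to see the routing mesh in action (node IPs are illustrative, service published on port 8080): curl any node in the swarm, even one that runs no task for the service, and the request still reaches a healthy container.

```shell
# Either node answers for the service, whether or not it hosts a task.
curl --max-time 2 http://192.168.65.3:8080/ || true
curl --max-time 2 http://192.168.65.4:8080/ || true
```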
- Remove a node, and the service is still available
- Scale up and down
$ docker service scale temp=10
$ docker service update --replicas 10 temp # same as above.
$ docker service rm temp # delete the service.
If we have 5 nodes, then after the scale command each node should be running 2 containers.
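You can confirm the spread by listing each task together with its node (docker service ps supports --format with Go templates):

```shell
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
docker info >/dev/null 2>&1 || { echo "docker daemon not running, skipping"; exit 0; }

# With 10 replicas over 5 nodes, each hostname should appear twice.
docker service ps temp --format '{{.Node}}: {{.Name}}' | sort || true
```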
$ docker network create -d overlay <network-name>
$ docker service create --name <service-name> --network <network-name> -p 80:80 --replicas 12 <container-image:v1>
If we want to update the service to v2, a newer version of the container image, we can run:
$ docker service update --image <container-image:v2> --update-parallelism 2 --update-delay 10s <service-name>
Here we update 2 tasks at a time, with a 10-second delay between batches. Let's check:
$ docker service ps <service-name> | grep :v2
This shows only the tasks that are already running the newer version of the service.
$ docker service inspect --pretty <service-name>
We should see the update state reported as completed. 🙂