Thursday, 26 December 2019

Docker



  • Courtesy: https://www.linkedin.com/learning/docker-for-java-developers
  • Github code base: https://github.com/arun-gupta/docker-for-java
  • Introduction:
  • Docker is an open-source project and a company - basically started with intention to build containers for projects and applications.
  • It's literally on github : https://github.com/docker
  • It's used to build, deploy and run containers for software applications.
  • Docker defines a standard packaging format (the image) that applications follow; containers are running instances built from those images.
  • Getting Started:
  • Docker is basically a 3 main step process :
    • Build , Ship , Run
  • Build - For build, Docker defines a standard Docker image. When you build your application, you are describing how your application's components and configuration should look, packaging them together, and calling the result a Docker image.
  • Ship - Once you have built your Docker image, you want to share it with someone, run it in production, or let someone else try the exact same image. For that you need shipping. By default there is Docker Hub (https://hub.docker.com): once you have built the image, you can push it there, see what it looks like, and download it again. Docker Hub is public by default, but a private registry (inside your firewall) is also available with Docker enterprise versions.
  • Run - Once you have shared the application (for example on Docker Hub), you run it as a Docker container.
  • Summary: Docker is an application delivery technology. It helps us build an application as a Docker image, ship it via Docker Hub, and run it as a Docker container - including running multiple instances of the container to avoid a single point of failure.
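  • A minimal sketch of the three steps as CLI commands (the image name myuser/myapp and tag 1.0 are illustrative, not from the course):

docker build -t myuser/myapp:1.0 .              # Build: package the app and its config into an image
docker push myuser/myapp:1.0                    # Ship: push the image to a registry (Docker Hub by default; needs docker login)
docker run -d --name myapp1 myuser/myapp:1.0    # Run: start a container from the image
docker run -d --name myapp2 myuser/myapp:1.0    # Run a second instance to avoid a single point of failure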

  • Pre-requisites:
    • Windows - Windows 10 Pro - 64 bit
    • Linux - all flavours almost
    • Admin privileges on the machine
  • In Docker, you create an image on a variety of operating systems. The Docker engine (or Docker host) understands the Docker image format, takes that image and runs the container for you. So once the image is ready, you can run it on multiple operating systems.
  • Docker was originally developed on Linux, so on Linux it runs natively. On Mac and Windows, it runs inside a small virtual machine.
  • Docker has 3 entities:
    • Docker client
    • Docker host/engine
    • Registry
  • You can relate the above entities to the Build, Ship and Run steps.
  • Docker client -> Build
  • Docker registry -> Ship
  • Docker host / engine -> Run
  • Assuming you have the Docker client installed on your machine, the client is configured to talk to a Docker host. That can be a single Docker host or a cluster of Docker hosts.
  • The Docker client issues a command (e.g. docker run) to run a container. Since the client is configured to talk to the Docker host, the host receives the command from the client.
  • On the Docker host there is a Docker daemon running, listening on a specific port for the REST call made by the Docker client.
  • To run a container, Docker needs a Docker image.
  • By default, the Docker host is configured to talk to a registry, which is typically Docker Hub.
  • So the Docker host asks the registry whether it has this image, because it needs to run the container.
  • Once found, the image is downloaded onto the Docker host.
  • Once the image is on the Docker host, you can run as many instances of the container as you want.
  • To start with Docker, download docker from https://www.docker.com/get-started



  • Courtesy: https://www.youtube.com/watch?v=FlSup_eelYE
  • Download and Install docker engine on your machine. (Windows/Mac/Linux)
  • Start the docker service (engine). You will get a message 'Docker is running'.
  • So, if you create a docker image, it will create a docker container on my local machine
  • Download a sample spring boot application, import it in your eclipse
  • Create a file named 'Dockerfile' in your project directory and add the required entries.
  • Open the docker terminal on local
  • Verify docker is running with : docker -v command
  • So the Docker engine is running on your local machine now. We are going to use Docker to package our jar into an image and then start that image.
  • To create a Docker image, we first need to build it.
  • To build the image, go in the terminal to the project root directory 'docker-spring-boot', where the Dockerfile lies.
  • Run the command: "docker build -f Dockerfile -t docker-spring-boot ."
  • Basically this builds a Docker image using your Dockerfile and gives it the tag name 'docker-spring-boot', which you can then use on any platform.
  • It executes all steps in the Dockerfile: it downloads the openjdk image from Docker Hub onto your local machine; adds your jar file into the image under the root directory; exposes port 8085; and sets the entrypoint, i.e. the main command to run when the container starts (a sample Dockerfile sketch is shown at the end of this section).
  • To verify the image was created successfully, run: docker images. Your image name should show in the list.
  • Now run this image with: docker run -p 8085:8085 docker-spring-boot
  • This runs the Docker image and maps port 8085 on your machine to port 8085 inside the container. That is, the app listens on port 8085 in the container, and the host's port 8085 is forwarded to it. The image name goes at the end of the command.
  • When you run this, you can see the Spring Boot logo and logs in the console.
  • So it has created a container for you.
  • Steps sequence: build your JAR, docker build, docker run
  • Go to your local browser: http://localhost:8085/rest/docker/hello
  • It will give you the output "Hello ABC", which you wrote in the class.
  • Summary:
  • Go to the Docker website and download the Docker software/engine. Every time you create an application, you need to create a Docker image, and to create the image you need a Dockerfile.
  • So using the Dockerfile, you build a Docker image. After building the image, you run it with the docker run command.
  • Docker creates an isolated container and runs our application inside that container.
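  • A sketch of what the Dockerfile described above might look like; the jar path target/docker-spring-boot.jar is an assumption - adjust it to your actual build output:

FROM openjdk:8
ADD target/docker-spring-boot.jar docker-spring-boot.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "docker-spring-boot.jar"]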

  • https://www.youtube.com/watch?v=lcQfQRDAMpQ
  • Main purpose of Docker:
  • To move away from the traditional problem where code works fine locally but not in another environment, or works in dev but not in QA, etc.
  • Docker resolves this problem and makes the application behave uniformly across all platforms and environments.
  • Another problem Docker addresses is the architecture of VMs on top of a host OS: each application uses the memory and CPUs allocated to the VM where it is hosted, which can lead to over- or under-utilization of resources. Docker containers, by contrast, are lightweight.

  • Courtesy: https://www.youtube.com/watch?v=2lU9zdrs9bM  /  Edureka
  • FROM command: It defines the base image to use to start the build process
  • Syntax: FROM [image name]
  • Example: FROM openjdk:8   /   FROM ubuntu  (the base image is pulled from the registry, so it works regardless of the host OS)

  • RUN command: It takes a command as its argument and runs it to form the image. Unlike CMD, it is actually used to build the image.
  • Syntax: RUN [command]
  • Example: RUN  apt-get install -y riak

  • CMD command: Similar in form to RUN, but it is not executed while building the image; it defines the default command that runs when a container is started from the image.
  • Syntax: CMD application "argument", "argument", ...
  • Example: CMD ["echo", "Hello"]

  • ENTRYPOINT command: It sets the command that is always executed first when you run a container from the image; when both are present, CMD supplies default arguments to the ENTRYPOINT.
  • Syntax: ENTRYPOINT application "argument", "argument", ...
  • Arguments are optional here.
  • Example: ENTRYPOINT echo

  • ADD : Used to copy files from your source/host to the container (container's file system at the destination path)
  • Syntax: ADD [source dir or URL] [dest dir]
  • Example: ADD /my_app_folder /my_app_folder

  • ENV : Lets you tell the Docker container that your application needs certain environment variables, and what their values are.
  • Syntax: ENV [key] [value]
  • Example: ENV ABC 4

  • WORKDIR: Inside your Docker image, you often want to switch to a particular directory and run subsequent instructions from there.
  • Examples: WORKDIR /path   /   WORKDIR ~/

  • EXPOSE: Used to declare a port so that networking is possible between the process running inside the container and the outside world (the host).
  • It tells Docker that the application inside the container will be listening on this particular port.
  • Syntax: EXPOSE [port]
  • Example: EXPOSE 8080

  • MAINTAINER: Lets you set the author name, i.e. record who created the Docker image (now superseded by the LABEL instruction). It should come after the FROM command, so anyone who pulls your image from Docker Hub knows who created it.
  • Syntax: MAINTAINER [name]
  • Example: MAINTAINER author_name

  • USER: If you want the container to run as a particular user, use this command. It sets the UID (or username) that will run the container based on the image being built.
  • Syntax: USER [UID]
  • Example: USER 791

  • VOLUME: Used to enable access from your container to a directory on the host machine (i.e. mounting it). It sets a path where your container will store its data. This is where all files related to your Docker container will live, and the path can be shared by multiple containers.
  • Syntax: VOLUME ["/dir_1", "/dir_2", ...]
  • Example: VOLUME ["/my_files"]
  • So if you have multiple containers hosting the same application, you might want them to use the same storage path; a sketch of this is shown below.
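  • A short sketch of sharing a path between containers; it uses the -v runtime flag (which complements the VOLUME instruction), and the volume name my_files and the sleep commands are only illustrative:

docker volume create my_files
docker run -d --name app1 -v my_files:/my_files ubuntu sleep infinity
docker run -d --name app2 -v my_files:/my_files ubuntu sleep infinity
# Files written to /my_files by app1 are visible to app2, and survive removal of both containers.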

  • Creating an image to install apache web server
  • Here is the Dockerfile to install Apache:
  • FROM ubuntu:12.04

MAINTAINER edureka

RUN apt-get update && apt-get install -y apache2 && apt-get clean && rm -rf /var/lib/apt/lists/*

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2

EXPOSE 80

CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

  • Save the above content to a Dockerfile on your machine
  • Go to the location/directory where this file is placed and run the following command to create a docker image:
  • docker build -t myapachedockerimage .
  • Give command: docker images
  • This is to see whether your image is shown in the docker images list
  • Now, you need to create a docker container out of this image. To create docker container:
  • docker run -p 80:80 --name=App1 myapachedockerimage
  • Inside the Dockerfile, we specified that our application will be active on port 80. To access the application from the host machine, we need port mapping from the host port to the container port. In the command above, the host port comes first and the container port after the colon.
  • --name allows u to give name to your container
  • Now, go to browser : http://localhost:80/

  • Your service is running now in that terminal. You can verify it from another terminal.
  • Open another terminal, you can type:  docker ps
  • It will show details of the running, containerized application: container id, image name, created time (e.g. 50 seconds ago), container name, port number, etc.
  • Now, to stop running the service, either in the old terminal, press Ctrl+C OR in other terminal, hit the command: docker stop <container-id>

  • Courtesy: https://www.youtube.com/watch?v=LQjaJINkQXY
  • To build an image from scratch (i.e not referencing any existing image), we have the option of specifying the following in Dockerfile:
  • FROM scratch
  • If you go to docker hub, there is an empty image already created called 'scratch'.
  • Whatever you specify with CMD in the Dockerfile will be executed only when you create/start the Docker container (i.e. docker run).
  • Whatever you specify with RUN in the Dockerfile will be executed while building the image (i.e. docker build).
  • Help related to Dockerfile: https://docs.docker.com/engine/reference/builder/
  • https://github.com/wsargent/docker-cheat-sheet
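  • A minimal sketch of a scratch-based Dockerfile, assuming you already have a statically linked binary named hello in the build directory (the file name is hypothetical):

FROM scratch
COPY hello /hello
CMD ["/hello"]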

  • https://www.youtube.com/watch?v=QBOcKdh-fwQ
  • To pull the image from docker hub, get the image name and run:
  • docker pull ubuntu
  • Assuming that you pulled 'ubuntu' image from docker hub.
  • To verify u pulled successfully, you can run : docker images
  • Images are basically templates to create docker containers. They are the files which have the information about what will be required by the container.
  • Whereas, container is a running instance of an image.
  • So when u run an image, a container is created.
  • Images r stored in registries (ex: docker hub) - can be local or remote locations
  • Other options:
  • docker images -f "dangling=true"
  • Here, -f means filter.
  • A dangling image is one that is not tagged and is not referenced by any container. The above command lists only such images (use "dangling=false" to list the rest).
  • To run the image (i.e. to create the container) in interactive mode, you can give the command:
  • docker run --name MyUbuntu1 -it ubuntu bash
  • MyUbuntu1 is the container name, ubuntu is the image name, and -it stands for interactive mode (with a terminal attached).
  • On hitting the above command, you enter the container. Just run the 'ls' command and you will see the list of directories/files inside the container.
  • If you open a new terminal and hit docker ps, it will show the container as running.
  • If you want all the detailed information about the image, you can hit 
  • docker inspect <img_name>
  • It will show u the json with all details, you can inspect it for viewing all micro level details.
  • To remove the image, just type:
  • docker rmi <img_name>
  • Then type: docker images
  • You will find that the image is deleted now.
  • If the image is in use by a container, it will give an error message saying it's in use. So first stop and remove the container using: docker stop <container-id> and docker rm <container-id>.
  • You can also remove the image forcefully using: docker rmi -f <image> (example: docker rmi -f ubuntu:18.04)
  • Then run docker images to verify.

  • Courtesy: https://www.youtube.com/watch?v=Rv3DAJbDrS0
  • Docker components architecture:
  • When you fire the docker pull command from the Docker client, it pulls the image from the Docker registry and downloads/saves the image on your Docker host.
  • When you fire docker run, it checks for that image on your local system; if the image is available, it runs it and creates a container from it. If the image is not available, it goes to the registry, pulls the image and then creates the container.
  • docker build is used to create a Docker image from a Dockerfile, which you can then run as a container.

  • That's why we often fire docker run directly (instead of docker pull first): docker run will first check for the image on your local Docker host, and if it's not available, it will download it from the registry and then run the container.
  • To list out all the containers on ur local system: 
  • docker ps -a
  • It will list out all the containers on your system.
  • We also have start/stop cmds to start/stop the container
  • docker start containerid/containername
  • docker stop containerid/containername
  • Along with that, we can also pause and unpause the container:
  • docker pause containerid/containername
  • docker unpause containerid/containername
  • So if you have two terminals - one in which the container is running - and in the other you issue 'docker pause containerid', the first terminal (where your container is running) is paused entirely; it won't even accept typed commands.
  • When you unpause the container with 'docker unpause containerid', whatever you typed while it was paused shows up and runs in that previously paused window.
  • Other commands are like:
  • docker top containerid/name
  • It will show the top processes of the container
  • docker stats containerid/name
  • It will show the stats of the container id (like memory usage, cpu usage)
  • You also have:
  • docker attach containerid/name
  • So if you fire this attach command in a terminal (where you have not kept the container running),  then it will attach the containerid (container) in that terminal. That is, you will enter the container, and you will see the directories present in the container.
  • docker kill containerid/name
  • This terminates/kills the container if it is running; if the container is not running, it prints a message saying so. So it kills only a running container.
  • docker rm containerid/name
  • This will remove/delete the container
  • docker history imagename
  • This will show the history of the image.
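  • A sketch of a typical lifecycle using the commands above; the container name web1 and the sleep command are illustrative:

docker run -d --name web1 ubuntu sleep infinity   # create and start a container
docker pause web1       # freeze all processes in the container
docker unpause web1     # resume them
docker top web1         # processes running inside the container
docker stats web1       # live CPU/memory usage
docker stop web1        # stop the container
docker rm web1          # remove the stopped container
docker history ubuntu   # layer history of the image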

  • Courtesy: https://www.youtube.com/watch?v=HqBMEmoAd1M
  • docker version
  • It will give you details like docker client version, API version, Go version, last git commit version, docker server version, last built datetime, OS/Arch.
  • docker -v
  • docker --version
  • These 2 cmds will give the same output like: Docker version 18.03.1-ce
  • docker info
  • This info cmd will give detailed information about the docker you have installed on ur machine like total containers, images, server version, storage driver, kernel info, cpu/memory usage, etc
  • docker --help
  • This command provides help with all commands related to Docker.
  • You can use like: docker images --help (if you want to get help on docker images cmds)
  • OR for docker run related queries, you can type docker run --help
  • docker login
  • You can use this cmd to login to your hub.docker.com with your credentials
  • docker system df
  • To check the disk usage of docker, you can run this command. It will show the size taken by the images, containers, local volumes, cache

  • docker system prune
  • This command removes all stopped containers, all dangling images and all build cache, so be careful while using it. It asks for confirmation after you fire it, like 'Are you sure you want to continue? [y/N]'.
  • Dangling images are images that are not tagged and not associated with any container.
  • You can also check docker system prune --help for more limited removal options.
  • If you give the command docker system prune -a (i.e. --all), it removes all stopped containers, all images without at least one container associated to them, and all build cache.

  • Docker Compose
  • Docker Volume
  • Docker Swarm
  • Docker Networking
  • Orchestration
  • Docker security

  • Courtesy : UDEMY:
  • Docker 19.03 - latest version released this summer 2019
  • The whale carrying containers in the Docker logo is known as Moby Dock.
  • The Docker mascot is Gordon the turtle.
  • Installation: Two ways - You can either install Docker CE (Community edition) or EE (Enterprise Edition)
  • store.docker.com - will ideally take u to hub.docker.com
  • There are three major types of installs : Direct, Mac/Windows , Cloud
  • On Mac/Windows, it is not a direct install of Docker; you get a suite to install, because on Mac/Windows a small VM has to be started in the background to run the Docker containers. To install Docker on Windows you need at least Windows 10 Pro. For Windows 7, 8 or 10 Home edition, you need to install Docker Toolbox instead.
  • On Linux, Docker is supported natively, so no VM is needed.
  • On Cloud, it can be AWS/Azure/Google. The Docker variant varies per cloud you select; for AWS, for example, it provides a CloudFormation template.

  • Now, Kernel is the software at the core of an operating system, with complete control. Example - Kernel manages memory allocation, cpu utilization, etc. And CPU is the core circuitry which executes program instructions. Docker runs on top of the original machine's kernel - making it the host machine.
  • The system in place that handles docker services is called the docker engine. The engine consists of server process to run docker features, API, and CLI.
  • The server is also called Docker Daemon - daemon is the background process running on OS.
  • It runs within the kernel of the host machine. It listens for requests from the user.
  • User sends requests via CLI which docker gives; and once docker daemon receives the request, it starts to create containers.
  • Docker container envts: 
  • The processes of one container cannot affect the processes of another
  • A container has limits on resource usage like the CPU and memory.
  • Application specific code - without installing dependencies on the host machine. A container gets its whole own file system so all app-specific code will exist within that file system including all its libraries and dependencies.
  • So a Docker container is a loosely isolated environment running within a host machine's kernel that allows you to run app-specific code.
  • The Docker engine running on a host machine creates these environments, called containers, each with a certain degree of process isolation, its own file system, resource limits, and the app-specific code and libraries inside it.
  • Why are containers useful and different from VMs:
  • Portability to multiple OS environments: whether you run your app on Linux, Windows, Mac or in the cloud, Docker ensures it runs identically in every environment.
  • Any machine with Docker installed can take a container and run it - less time setting up environments, more time coding.
  • You can run containers in your local environment and also in CI/CD.
  • Containers are very lightweight compared to VMs.
  • Example: if you need to run a 2 GB software image on a VM, and you need it on 5 VMs running on the same host machine, you need 10 GB of storage on the host.
  • With Docker containers, you need 2 GB of storage for the first container, but much of that is shared with any container started later: the file-system layers between containers are shared, and common files are reused rather than re-downloaded. This way you can start hundreds or thousands of containers without overloading your host machine's disk space.
  • Docker images:
  • How are docker containers created ? - They are created by the objects in docker called images.
  • Docker images are read-only templates with instructions for creating a docker container
  • These instructions define everything that a container needs - what container will need to run, its specific software, libraries, envt vars, config files and more.
  • So when u run the image (which is an executable package), u create a docker container.
  • So the container is the result of executing those instructions written in the docker image.
  • Image to container relationship:
  • A Docker image is the assembled set of instructions that tells Docker how the container should be created.
  • A container is the result of running that image.
  • Docker reads and executes the instructions written in the image in order to create a container.
  • A Docker image is like a class, from which you can create multiple object instances - the containers.
  • Ubuntu image search and running its container on ur host
  • With docker search command, u can search for 'ubuntu' image on dockerhub
    • docker search ubuntu
  • You will see all related images of ubuntu along with desc column. It is similar to what you search in ur hub.docker.com.
  • Now we pick the first image in the list, the one with the most stars, as we want to create an ubuntu container from the image 'ubuntu'.
    • docker create --name=foo -it ubuntu bash
  • Docker pulls the ubuntu image from Docker Hub (after finding it's not present locally on the machine), downloads it, and then creates a container named 'foo'.
  • The output of the above command is a unique id, which is the container id.
  • Now if you do: docker container ls
  • Your container does not appear in the list. Why?
  • Because the docker container ls command lists only containers that have been started and are running.
  • We created the container with the above command, but that doesn't mean it's running yet.
  • It just means the container has been created, not started (see the sketch at the end of this block).

  • You cannot remove/delete a running container with - docker rm container_name. First you need to stop it using "docker stop container_name". Then docker rm container_name.
  • docker container ls -a  --> to list all containers (even non-running containers are shown)
  • But if you have removed/deleted the container using docker rm, then 'docker container ls -a' will not populate the name of container.
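  • A sketch of the create vs. start distinction discussed above, reusing the container name foo from the example:

docker create --name=foo -it ubuntu bash    # creates the container, does not start it
docker container ls                         # foo is NOT listed (only running containers shown)
docker container ls -a                      # foo IS listed, with status 'Created'
docker start foo                            # now the container is running
docker container ls                         # foo appears
docker stop foo                             # stop it; docker rm foo then removes it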

  • Courtesy : SKILLSOFT:
  • Docker community edition (CE):
  • It is free, and the Docker CE version you download has a name like v17.06.0-ce - the version encodes year.month, i.e. 2017, June.
  • If it's v17.06.1-ce, it means a patch has been applied on top of the .0 release, with security fixes and bug fixes added.
  • Edge releases - released every month; also called beta releases. They can still contain bugs, so stability is not guaranteed.
  • Stable releases - what most enterprises use in production. Made available quarterly.
  • Docker enterprise edition (EE):
  • Gives more features.
  • Apps created in docker CE will run fine in docker EE.
  • Docker EE has 3 tiers - Basic, Standard, Advanced.
    • Basic - provides containers and plugin support (e.g. git, hub.docker.com), but does not support Role-Based Access Control (RBAC), LDAP authentication, Docker Datacenter, or image security and vulnerability scanning.
    • Standard - provides multi-tenancy, private container repositories, Docker Datacenter, image signing, LDAP and RBAC; but does not support image security and vulnerability scanning.
    • Advanced - provides Docker Datacenter with a web admin UI, docker-compose file deployment, app deployment, rolling app updates and RBAC. Also supports image security and vulnerability scanning.
  • Docker with Windows 10:
  • Required: Windows 10 64-bit, with Windows version update at least 14393.222
  • Before installing docker on ur windows, ensure u:
    • Enable hardware virtualization 
    • Enable hyper-v
    • Apply latest software updates
  • Once u install docker on windows 10, it supports both windows and linux containers.
  • Types of windows containers:
    • Windows Server containers - they share the underlying host OS kernel, which makes them very quick to start.
    • Hyper-V containers - each runs inside a lightweight virtual machine, optimized for supporting both Windows and Linux containers. These provide more isolation and security than Windows Server containers because the kernel is not shared with the underlying host OS.
  • There is no difference in building images for the above two types of containers. Also the same docker CLI manages both types of containers
  • Hyper-V containers are managed by Docker
  • Hyper-V VM is created from a docker base image only. The container exists inside the VM.
  • What is Edge vs Stable
  • Edge means the beta version, released monthly, whereas Stable is released quarterly.
  • Edge gets new features first, but each Edge release is only supported for a month.

  • Two types of containers: Windows containers, linux containers
  • Linux containers are still default
  • Windows containers - supported only with Windows 10 Pro/Enterprise.
  • For Windows 7, 8 and 10 Home edition, you need to use Docker Toolbox. Download Docker Toolbox and run the full install.
  • One more requirement with Docker Toolbox: when you git clone the code, you need to place it inside C:/Users/id/code

  • Web server with Docker (for demo purposes) - we use the Nginx web server because it is easy to use and configure.
  • So the following docker command 'docker run --publish 80:80 nginx' will:
    • Download nginx image from docker hub
    • Start a new container from that img
    • Open port 80 on the host IP
    • Routes that traffic to the container IP, port 80
  • To see the list of running containers: docker container ls
  • To see entire list of containers: docker container ls -a
  • Old way is : docker ps
  • docker container run - always starts a new container
  • docker container start - to start an existing stopped one
  • To view container logs:
  • old way: docker logs
  • new way: docker container logs - show logs for a specific container (ex: docker container logs containername/containerid)
  • You also have the option of not specifying all digits of the container id - if you type, say, docker container logs 8ah2 and hit enter, Docker will find the unique container id starting with that prefix and use it.
  • Remove container:
  • old way: docker rm
  • new way: docker container rm
  • docker container run - ideally what it does in the background:
    • Looks for the image locally in the image cache; doesn't find anything
    • Then looks in a remote image repository (by default Docker Hub)
    • Downloads the latest version (e.g. nginx:latest) and stores it in the image cache
    • Creates a new container based on that image and prepares to start it
    • Gives it a virtual IP on a private network inside the Docker engine
    • Opens up port 80 on the host and forwards it to port 80 in the container (because we gave --publish 80:80)
    • Starts the container by using the CMD in the image's Dockerfile
  • Docker container -we cannot ideally compare it to a VM; it is just a process running on an OS (which u can check with 'ps aux' command)
  • --detach (short form -d) to detach a container. By running in detached mode, we are able to have access to our command line when the container spins up and runs. Without it, we would have logs constantly fed onto the screen
  • --env (short form -e) to pass any envt variable to a container while running the docker run cmd
  • Just a question like:
  • Would the following two commands create a port conflict error with each other? 
    docker container run -p 80:80 -d nginx
    docker container run -p 8080:80 -d nginx
  • Answer is NO, just because the containers are both listening on port 80 inside (the right number), there is no conflict because on the host they are published on 80, and 8080 separately (the left number).
  • I ran 'docker container run -p 80:80 nginx' and my command line is gone and everything looks frozen. Why?
  •  Answer: You didn't specify the -d flag to detach it in the background, and there aren't any logs coming across the screen because you haven't connected to the published port yet, so Nginx has nothing to log about.
  • docker container top - shows the process list in a container when we specify containername/containerid
  • docker container inspect - details of a container configuration
  • docker container stats - shows live streaming view of performance statistics of all containers. But if we specify containerid at the end, it will give u container specific stats info
  • docker container run -it : To start the container interactively
  • docker container exec -it : Does the same thing as run command, but it does it to an existing container; to run additional command in existing container
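  • A short sketch contrasting run -it and exec -it, with an illustrative container name web:

docker container run -d --name web nginx      # start a container in the background
docker container run -it nginx bash           # starts a NEW container with an interactive shell
docker container exec -it web bash            # opens a shell inside the EXISTING 'web' container
docker container exec web ls /etc/nginx       # run a one-off command inside 'web'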

  • Docker Networks
  • docker container port <container>
  • The above command gives you the list of ports that are open in your container.
  • When you start a container, in the background you are actually connecting it to a particular Docker network; by default that is the 'bridge' network.
  • Each such network routes out through a NAT firewall, which is actually the Docker daemon configuring the host IP address on its interface, so your containers can get out to the internet or the rest of the network and back.
  • But we don't need -p in the command when we only want specific containers to talk to each other inside our host.
  • For example: if you have an application with a mysql container and a php/apache container, those two containers should be on the same network and able to talk to each other without opening their ports to the rest of your physical network.
  • And if you have another, unrelated app (say mongo and nodejs containers), you can create a separate network for it, so its containers can talk to each other without using -p to expose them to the network, but they can't talk to the other network.
  • On any single interface (ethernet, for example) on your host, you cannot listen on the same port for multiple containers. That is, you cannot have two containers published on port 80 at the host level; only one can do that. If you start another container with the same host port, it errors out saying something else is already on that port. This is a normal networking limitation, not a Docker limitation.
  • So you can create multiple virtual networks based on the applications you want to run and on your security requirements.
  • You can have multiple networks connected to one container, or a container attached to no network at all. The point is, a container can be in more than one network or in a single network, based on the requirement.
  • You can also skip the virtual networks and use the host IP directly (--net=host).
  • Another point to note: the IP address of your host and of your container are not the same.
  • So by using -p, you are exposing the port to the physical network.

  • To see the list of docker networks - docker network ls
  • To inspect details of a network - docker network inspect (e.g. docker network inspect bridge). It shows the list of containers attached to that network (e.g. the webhost container), along with their IP addresses.

  • To create a network, optionally specifying a driver (built-in or 3rd-party) for the new virtual network - docker network create --driver
  • Connect/disconnect commands change a live running container so that a new NIC is created for (or removed from) that container on a network - docker network connect / docker network disconnect
  • Now, when u do : docker network ls, it will give 3 networks:
    • bridge
    • host
    • none
  • bridge - the default Docker virtual network, which is bridged through the NAT firewall to the physical network that your host is connected to. By default, all your containers attach to this network and just work.
  • Suppose we have an nginx container running: if you fire 'docker network inspect bridge', you see a JSON output containing the list of containers attached to that network (bridge), along with each container's IP address. These networks assign IP addresses automatically.
  • host - a special network that skips Docker's virtual networks and attaches the container directly to the host's interface. It can improve performance for high-throughput workloads, but it is less isolated and can be less secure - it depends on the requirements.
  • none - equivalent to having an interface on your computer that is not attached to anything. It removes eth0 and leaves you with only the localhost interface inside the container.

  • Now, create a new network, for example: docker network create my_app_net
  • It creates a new network with the default driver, 'bridge' - a simple driver that creates a virtual network locally with its own subnet.

  • You can use docker network --help to get help on the netwk commands.
  • Example of docker network connect command
  • Example: docker network connect 01ce8cb841002 c82f4g33h4
  • The first value is the network and the second is the container: the command attaches that container to the network.
  • Basically this command creates a NIC in a container on an existing virtual network.
  • So your container is on two networks now (bridge and my_app_net); you can check with docker network inspect.
  • If you want to disconnect it: docker network disconnect 01ce8cb841002 c82f4g33h4
  • Then do docker network inspect again - it is back to just one network (bridge). A version of this flow using names instead of IDs is sketched below.
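  • The same connect/disconnect flow sketched with names instead of IDs (my_app_net and webhost are the example network and container mentioned earlier):

docker network create my_app_net              # new network, default bridge driver
docker network connect my_app_net webhost     # adds a second NIC to the container
docker network inspect my_app_net             # webhost now appears in this network's container list
docker network disconnect my_app_net webhost  # detach it again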
  • Recommended:
  • Create your apps so that the backend and frontend sit on the same Docker network. This protects them, because in the physical/VM world we often over-expose ports and networks on our application servers; here you expose on the host only the ports you specifically publish with -p.
  • The apps' intercommunication never leaves the host.
  • All externally exposed ports are closed by default.
  • You must manually expose ports via -p, which is better default security.
  • So everything is better protected with this default networking model.

  • Docker container intercommunication with DNS:
  • As containers launch, stop, relaunch, and connect to or disconnect from networks, you cannot rely on container IP addresses for communication. It's better to rely on the DNS names Docker assigns to containers.
  • Docker uses the container name as the equivalent of a host name for containers talking to each other.
  • That is, the Docker daemon has a built-in DNS server that containers use by default.
  • Docker defaults the host name to the container name, but you can also set aliases.
  • For example, we created a new network my_app_net with one container 'new_nginx' on it. Because this is a new, custom network (not the default 'bridge'), it gets a special feature: automatic DNS resolution of all containers on this network, from all other containers on this network, using their container names. If I create a second container on this network (my_app_net), the two will be able to find each other by container name regardless of their IP addresses.
  • Using following command, u run a new container from 'nginx' image:
  • docker container run -d --name my_nginx --network my_app_net nginx
  • So, now if u inspect that netwk, u will see two containers in the list - my_nginx , new_nginx
  • Now, run:
  • docker container exec -it my_nginx ping new_nginx
  • It will give u the ping output reply of bytes received. 
  • So, the DNS resolution just worked.
  • So containers just need to be configured with the names of the containers they should talk to, rather than IP addresses.
  • The reverse direction also works, i.e. pinging the first container from the second one:
  • docker container exec -it new_nginx ping my_nginx
  • It will also give u the ping output reply of bytes received.

  • The default network (bridge) does not provide DNS resolution by default.
  • You can work around that with the legacy --link option (which adds a link to another container), but it is always better to create a new network instead.
  • So summary:
  • Containers shud not really rely on IP Addresses for inter communication
  • DNS is user friendly and built in if u use custom netwks
  • Question: If you wanted multiple containers to be able to communicate with each other on the same docker host, which network driver would you use?
  • Answer: Bridge - the default bridge network driver allow containers to communicate with each other when running on the same docker host.

  • Now, in Docker Engine 1.11 and later, if we create a custom network, we can assign an alias so that multiple containers on that network respond to the same DNS name.
  • To do that, u can create a new virtual netwk. Then create two containers from (for example) 'elasticsearch:2' image.
  • Use the --network-alias option with docker run: when you start a container, you tell Docker that you want an additional DNS alias (here 'search') that the container responds to, besides its container name.
  • This way u can just keep adding aliases to ur containers while creating them. And u will find out that they respond just like what DNS round robin does.
  • Once the containers are up, you can use the alpine image to do an nslookup, which returns the list of addresses for the DNS name 'search':
  • alpine nslookup search
  • Run alpine nslookup search with --net to see both containers listed for the same DNS name.
  • After both containers resolve to the same name 'search', you can use the curl command to get a quick response from the elasticsearch server (elasticsearch runs on port 9200).
  • Run centos curl -s search:9200 with --net multiple times until you see both "name" fields show.
  • Demo of above explanation:
  • Create a new custom netwk
  • docker network create dude
  • Attaching the container to netwk dude to find them with dns name 'search'.
  • NOTE:  --net-alias   OR  --network-alias both work 
  • docker container run -d --net dude --net-alias search elasticsearch:2
  • Now, as I am not specifying the name, I can hit the same cmd again, and see the output with another id.
  • Then fire: docker container ls
  • It will list both of the containers. 
  • I have not opened any ports, as I need to test inside this virtual netwk
  • Now, we need to test that with same DNS, we reach to both containers.
  • docker container run --rm --net dude alpine nslookup search
  • This runs the nslookup command against the 'search' DNS entry and then exits; the container cleans itself up because of --rm.
  • docker container run --rm --net dude centos curl -s search:9200
  • This gives back the elasticsearch node name, e.g. 'interlooper'. If I run the same command again, I may get another node name, e.g. 'Mr.N'. So if you keep running the command, it randomly returns either of the nodes behind the 'search' name - simple DNS round robin.

  • Docker Images
  • Docker images (official or public images) are available at hub.docker.com.
  • You need to sign in with your account created on hub.docker.com
  • Search for the image 'nginx'. You will find more than 15K results, but the first result is the one named just 'nginx', labelled as official.
  • This is the only official nginx image (with more than 10 million pulls); Docker Inc. makes sure that only the official image has the exact name 'nginx'. Every other public image has the account/organization/repository name in front, followed by 'nginx'. So the plain 'nginx' image is listed first, followed by the other public images.
  • A great thing about official images is their up-to-date documentation and the quality work that Docker Inc. ensures before marking an image as official.
  • Docker official images will mostly have versions, the public images might not always have versions.
  • Also official images will be having tags (ex: 'latest') associated with their name.
  • Normally, when you fire 'docker pull nginx', it pulls the nginx image tagged 'latest'. So someone who downloads the image without caring about a specific version ultimately gets the latest version.
  • If you type 'docker pull nginx:1.11.9', it pulls that specific version of the image. If the 'latest' image you previously downloaded with 'docker pull nginx' is actually the same version as 1.11.9, Docker does not download anything again - it is the exact same image.
  • So the tag names (latest, alpine, mainline) attached to versions (1.11.9, 1.11, etc.) let the user automatically get the latest version, or pick a tag, instead of remembering exact version numbers.
  • In hub.docker.com, u can click 'Explore' link, and see the list of all official images, and also their github link to the source code.

  • Docker Image Layers:
  • When you download images (and new versions of the same images) again and again, you will see the layers they are built from.
  • docker history nginx:latest - this command (the older form) shows the layers of changes that make up the image.
  • Note that this is not a list of changes made in a container - it's the history of the image's layers. Every image starts from a blank layer known as 'scratch', and every set of file-system changes made after that on the image is another layer. An image may have one layer or dozens of layers.
  • The history output shows that the last layer pushed in a lot of files - its size grew from 0 bytes to about 123 MB.
  • When you create an image, you start with one layer. Every time an update or new version is written on top, the system can identify whether a layer already exists or is new.
  • Separate copies are not stored when layers are re-used. Explanation below:
  • First you create a custom image/layer; then you start writing a Dockerfile, configure apache, open port 8080, and copy in the source code of a website.
  • Now suppose you do this for two different websites, and every line of the Dockerfile is the same except the last line, where you copy the source code of website B instead of A. You end up with 2 images.
  • But the only layers actually stored are: port, apache, custom, copy A and copy B - 5 layers, not 8. The same stack of image layers is never stored more than once; a sketch is shown below.
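  • A sketch of that layer re-use: two Dockerfiles identical except for the last line; the directories site-a/ and site-b/ are hypothetical:

# Dockerfile for website A
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
EXPOSE 8080
COPY site-a/ /var/www/html/

# Dockerfile for website B - only the last line differs
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
EXPOSE 8080
COPY site-b/ /var/www/html/

# All layers above the final COPY are shared between the two images; only the two COPY layers are stored separately.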

  • Container Layer
  • We have an apache img, we need to run a container out of it.
  • What docker does is it creates a new Read-Write layer for that container on top of that apache image.
  • The base image is read-only. When you run containers and change files that come from the image - for example, when container 3 changes a file that is in the apache image - that's known as 'copy on write': the file system takes that file out of the image and copies it (with the differences) into the container's layer.
  • So the container is just a running process plus the files that differ from what was in the apache image.
  • The '<missing>' entries you see in the layer list when you run the history command are fine: they are not missing files, they are intermediate layers that are not full images themselves, so they show up with that label.
  • Images are basically made of two parts - 1) binary content/file system changes  2) metadata
  • docker image inspect - this cmd gives u the metadata to analyze.
  • Summary:
  • Each layer is uniquely identified and stored only once on a host (daemon).
  • This saves storage space on host and transfer time on pull/push
  • A container is just a single read/write layer (i.e single layer of change) on top of an existing image.
  • docker image history and inspect commands can help us to know what's going inside an image and how it was made.

  • Image tagging and pushing to docker hub
  • docker image tag - to assign one or more tags to an image.
  • Technically, images don't have a name. If you do docker image ls, you will notice there is no name column in the output - only repository, tag, image id, created date and size are shown.
  • So you only see the image id, repository name and tag name, not an image name.
  • The repository names used here are official repositories. Repositories are normally organization-name/repo-name (e.g. udemy/nginx), but because we are using an official repository, just 'nginx' is shown.
  • docker pull nginx:latest   
  • docker pull nginx:mainline



  • The first command above downloads the latest version (the official 'nginx' image tagged 'latest' on Docker Hub).
  • When you fire the second command, it reports 'Downloaded newer image', and 'docker image ls' now shows two 'nginx' entries in the list. But if you look at the image id, it is the same for both entries - Docker does not store the image twice in the cache, which saves a lot of disk space. Tags are just labels pointing to an actual image id, and many tags can point to the same one.
  • docker image push
  • Above cmd uploads the changed layers to an image registry (default is docker hub)
  • Ex: docker image push bretfisher/nginx  - this pushes bretfisher/nginx image (local img) to docker hub.
  • The output can give you 'access denied' if you are not logged in to Docker Hub from the command prompt. To log in, use the login command as follows:
  • docker login <server-url>
  • This defaults to docker hub server, but u can override it by specifying server url
  • Otherwise, plain docker login will work.
  • It asks for username and password; after a successful login you are authenticated.
  • When you log in like this with your credentials using the Docker CLI, it stores an authentication key for your profile in the .docker/config.json file.
  • So if you are doing this from a machine you don't trust, fire docker logout after you finish.
  • Now that you are logged in, you can push the image to Docker Hub using docker image push bretfisher/nginx
  • U can refresh at docker hub and u can see ur image created there.
  • This will create img with default tag 'latest'.
  • If u want to add an additional tag to the same image, u can use:
  • Syntax: docker image tag source_or_existing_tag  new_tag
  • docker image tag bretfisher/nginx bretfisher/nginx:testing
  • Then fire: docker image ls
  • You will notice that all these images (bretfisher/nginx) have different tags but the same image id.
  • U can push this 'testing' tag to docker hub by using:
  • docker image push bretfisher/nginx:testing
  • It pushes this tag to Docker Hub too. You will notice logs showing 'Layer already exists' - when we download and upload images, layers that already exist are not transferred again.

  • Dockerfile
  • Clone bretfisher's repository (referred in starting sessions) and see the default sample docker file under dockerfile-sample-1 directory. Inside this dir, there will be only 1 file - Dockerfile
  • By default the file is named 'Dockerfile' (capital 'D', lowercase 'f'); if you want to use a different file name, pass it with: docker build -f <docker_file_name> .

  • Explanation of each line in Dockerfile:
  • First is the 'FROM' cmd.
  • FROM debian:jessie
  • This is the first instruction, and its main purpose is to save you time and effort. These minimal distributions are much smaller than the ISO images you would use to install a virtual machine, and need far less setup (installing Ubuntu on a VM, for example, takes a lot of time and effort). These distribution images are all official images, kept up to date with the latest security patches. One of the main benefits of using them in containers is to use their package distribution systems to install whatever software your image needs.
  • Package manager (PM) - PMs like apt and yum are the main reasons to build containers FROM debian, ubuntu, Fedora or CentOS.
  • Here, we r using the 'jessie' release in (FROM debian:jessie)
  • Next, is ENV cmd
  • ENV NGINX_VERSION 1.11.10-1~jessie
  • ENV is to set envt variables which r very imp in containers as thru this, we set keys and values for building and running containers. These envt var work on all OS and configs.
  • Each instruction we write becomes an actual layer in our Docker image: FROM is one layer, ENV is another layer, and so on. Therefore their order matters.
  • RUN command:
  • It runs shell commands inside the image while building it. You normally see RUN commands used to install software from package repositories, or to unzip files or edit files inside the image itself.
  • Here, as we are using debian, commands like apt-get are available (to be used in RUN commands) because they ship with the base image.
  • Example run cmd:
  • RUN apt-key adv --keyserver http://abc.nit.edu:80 --recv-keys 508161361626315 \
  • && apt-get update \
  • && apt-get install --no-install-suggests -y \
  • && rm -rf /var/lib/apt/lists/*
  • Here, && chains the commands to run one after another within a single RUN instruction, so separate layers are not created for each command. This saves space.
  • EXPOSE 80 443
  • By default, no TCP/UDP ports are open inside a container; no ports are exposed from the container to the virtual network unless listed in the EXPOSE command.
  • Note that EXPOSE does not mean these ports (80 and 443) are automatically published on the host - that is done with the -p option when we fire docker run.
  • Last, we have CMD command.
  • CMD is the final command that is run every time you launch a new container from the image, or every time you restart a stopped container.
  • CMD ["nginx", "-g", "daemon off;"]
  • RUN, ENV and EXPOSE are optional. FROM is required, and CMD is required unless the base image referenced in FROM already provides one. In practice all five appear in most images you will create.
  • This Dockerfile is having instructions on how to build an image.
  • So, when I build this image, this FROM cmd will actually pull that debian:jessie image from docker hub to my local cache, and then it will execute line by line inside my docker engine and cache each of those layers.
  • So, lets get started:
  • docker image build -t customnginx .
  • Here you are not pushing to Docker Hub; you are building an image locally with the tag name 'customnginx'. The dot at the end tells Docker to use the current directory (where your command prompt is) as the build context containing the Dockerfile.

  • Notice the hash at the end of each step, marked with --> and a unique id. It is the hash Docker keeps in the build cache, so that the next time we build, any line that has not changed in the Dockerfile is not re-run. That's part of what makes Docker builds and deployments so fast: it caches the steps of the build, so after you build the image the first time, if you are only changing your source code and not the software installation itself, all installations have already happened and build times are very short.
  • Building will be completed in a min or two, based on the commands u have kept in dockerfile.
  • Now, open dockerfile ,expose new port 8080
  • EXPOSE 80 443 8080
  • It's just allowing the container to receive packets on these ports.
  • Build it again with same command: docker image build -t customnginx .
  • This took only a couple of seconds.
  • Notice that each step logs 'Using cache'. Only at step 5 does it recognize that something is different, so it actually executes that step in the build; and every line that follows the changed line has to be rebuilt. So after step 5, the remaining steps are not taken from the cache but rebuilt.
  • So keep the lines you change least at the top of your Dockerfile, and the things that change the most at the bottom - a sketch of this ordering is shown below.
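  • A sketch of that ordering principle for a node app image (dependency manifest copied and installed before the source, so the install layer stays cached when only source changes; the paths are illustrative):

FROM node:6-alpine
WORKDIR /usr/src/app
COPY package.json package.json    # changes rarely -> near the top
RUN npm install                   # re-runs only when package.json changed
COPY . .                          # source changes often -> near the bottom
CMD ["node", "./bin/www"]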

  • Building images: Extending official images
  • The FROM cmd we already covered:
  • FROM nginx:latest
  • Second is: WORKDIR
  • You could also use a RUN command to cd into a directory, but the best practice is to use WORKDIR to switch the working directory.
  • Example: WORKDIR /var/www/nginx/html
  • Third is: COPY source dest
  • COPY index.html index.html
  • This command copies a source file from your local machine into your container image. Here, we pick up our local index.html and override the default index.html in nginx's default directory, so our web server serves our custom home page.
  • Dockerfile:
  • FROM nginx:latest
  • WORKDIR /var/www/nginx/html
  • COPY index.html index.html

  • Notice that there is no CMD here: CMD is inherited from the base image referenced in FROM (nginx:latest).
  • So this is how images depend on other images and so on.
  • Before we build this: by default, the nginx image just serves the default nginx welcome page (the default index.html) when we browse localhost:80.
  • docker container run -p 80:80 --rm nginx
  • So after running above cmd, open browser and see default index.html welcome page.
  • Exit the command. Then run:
  • docker image build -t nginx-html .
  • This goes fast, because we already had nginx in our image cache; all that actually runs is copying a small file and changing the working directory.
  • Now, lets run the exact cmd we ran before but with the new image name that we built:
  • docker container run -p 80:80 --rm nginx-html
  • Go to browser, refresh and see the new index.html contents displayed now.
  • To put these changes back to docker hub:
  • docker image tag nginx-html:latest bretfisher/nginx-html:latest
  • Now, run docker image ls  - and see the list having multiple entries but referencing the same image id.
  • So, just do docker image push bretfisher/nginx-html and that will push it to docker hub.
  • Assignment:
  • Download sample dockerfile-assignment-1 which is a node.js application (webapp) holding a docker file.
  • Read the instructions written by the developer in the Dockerfile and follow them.
  • Edit the docker file - vi Dockerfile, and add the following cmds:
    • FROM node:6-alpine
    • EXPOSE 3000
    • RUN apk add --update tini
    • RUN mkdir -p /usr/src/app
    • WORKDIR /usr/src/app
    • COPY package.json package.json
    • RUN npm install && npm cache clean 
    • COPY . .
    • CMD [ "tini" , "--" , "node" , "./bin/www" ]
  • The && sign tells linux to run the 2nd command only if the 1st one succeeded.
  • You could also write RUN npm install; npm cache clean - but with ; the 2nd cmd runs regardless of whether the 1st cmd succeeded.
  • To copy everything - COPY . . - This tells to copy everything from my current dir from localhost to the current dir in the image.
  • Now, build the Dockerfile:
  • docker build -t testnode .
  • Built successfully, run it:
  • docker container run --rm -p 80:3000 testnode
  • The host publishes port 80 and forwards it to port 3000, which the app listens on inside the container.
  • Open browser and check localhost, it should give u welcome message.
  • Now do : docker image ls
  • See the 1st image in the list - repository name  - testnode
  • Before pushing to docker hub, lets rename it to some meaningful name
  • docker tag testnode bretfisher/testing-node
  • docker push bretfisher/testing-node
  • Go to docker hub and refresh.
  • Fire: docker image ls - see the 1st in the list - bretfisher/testing-node
  • To remove this image - docker image rm bretfisher/testing-node
  • Now, if u run : docker container run --rm -p 80:3000 bretfisher/testing-node , then it will actually download it from docker hub, as it was unable to find it locally.
  • You can use 'prune' to clean up docker images, volumes, build cache and containers. Example:
  • docker image prune - to clean up just 'dangling' images
  • docker system prune - cleans up everything
  • docker system prune -a  - cleans up all images u r not using
  • docker system df - to see disk space usage
  • Lastly, realize that if you're using Docker Toolbox, the Linux VM won't auto-shrink. You'll need to delete it and re-create (make sure anything in docker containers or volumes are backed up). You can recreate the toolbox default VM with docker-machine rm default and then docker-machine create

  • Persistent Data: Data Volumes
  • Open docker hub , search for mysql image (official), open the github link of it and see the Dockerfile.
  • As its a database image, mostly it will have a VOLUME command in it.
  • It has VOLUME /var/lib/mysql - this is the default location of mysql databases
  • This image is programmed to tell docker that, when we start a container from it, it should create a new volume and assign it to this directory in the container. That means any files the container puts there will outlive the container until we manually delete the volume. That's why volumes need manual deletion - we cannot clean them up by just deleting the container.
  • You can use docker volume prune to cleanup unused volumes
  • So, do a docker pull mysql - to download the latest image of mysql
  • Then, docker image inspect mysql - we don't see the Dockerfile (it is not part of the image metadata); instead you get the image's JSON metadata. In the json, you can see:
    • "Volumes" : {
    •    "/var/lib/mysql" : {}
    • }
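  • As a quick check (a sketch, not from the course - assuming the field sits under .Config in the inspect JSON), you can pull just that field out with a Go template:
    • docker image inspect --format '{{ json .Config.Volumes }}' mysql
    • # prints something like: {"/var/lib/mysql":{}}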
  • Now, lets run a container from this image.
  • docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True mysql
  • Here -e is for passing the envt var
  • Hit - docker container ls
  • Here, u will see the mysql running, and then u can type : docker container inspect mysql, u will see the json having same "Volumes" attribute, Along with it u will also see "Mounts":

  • This is the running container getting its own unique location on the host to store the data, which is then, in the background, mapped or mounted to that location in the container (/var/lib/mysql). The data actually lives at the "Source" location on the host.
  • Do - docker volume ls
  • So, it will show one volume. Also u can type: docker volume inspect to see the details.
  • If you are on a linux machine, you can navigate to that "Source" location on your hard drive and see the database files there.
  • For Mac/Windows it's a bit different: Docker is running a linux VM, so this data lives inside that VM. You won't be able to browse to that path on your windows/mac machine and see the data. With bind mounts (covered below), we can get around that limitation as well.
  • Even if you stop ur container (ex: mysql) using docker container stop OR even remove the container using docker container rm, then the volumes will be still there. Snapshot below:
<<docker volume output.png>>




  • Now, as u have observed, the volume names are defaulted to a volume id. So if you see the volumes in the output of 'docker volume ls', although u see two volumes for two containers - mysql and mysql2, but both are named with a volume id. This makes it difficult to determine which volume belongs to which container.
  • So we have 'named volumes' - which is a friendly way to assign volumes to containers.
  • We have -v option
  • docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v /var/lib/mysql mysql
  • The -v option lets us either create an anonymous new volume while running the container (just the container path, as above), create a named volume (name:path), or specify a host path to bind mount.
  • The command above does the same thing that the VOLUME instruction in this image's Dockerfile did.
  • But instead of that, we will put a name in front of it as below:
  • docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql
  • This will create a named volume, u can check it by doing : docker volume ls 
  • It will show the name of the volume as mysql-db instead of the long volume id.
  • docker volume create
  • This cmd is needed before 'docker run' only when you want custom drivers or labels on the volume, since those can't be set with -v at run time.
  • For most cases, it's fine to just create the volume at run time with -v.
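  • A sketch of that pre-create flow (the driver and label values are illustrative assumptions):
    • docker volume create --driver local --label project=demo mysql-db
    • docker volume inspect mysql-db
    • docker container run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql-db:/var/lib/mysql mysql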

  • Persistent data : Bind Mounting
  • Bind mounting maps a host file or dir to a container file or dir
  • At background, its just 2 locations pointing to the same physical location/file on disk
  • It skips UFS (the union file system), so when you delete the container nothing is wiped from the local disk, and host files override any files already in the container at that path.
  • A bind mount needs the data to exist on the host's hard drive. You can't specify a bind mount in the Dockerfile; you have to use it at runtime (docker container run).
  • Syntax is similar to named volumes. Just instead of vol name, we are giving full path:
  • docker container run -v /Users/Bret/stuff:/path/container  (Mac/Linux)
  • docker container run -v //c/Users/Bret/stuff:/path/container  (Win)
  • So, bind mount always starts with a forward slash as seen in above cmd.
  • Go to dockerfile-sample-2 project dir. It will have index.html and dockerfile.
  • Now, to specify the bind mount while running the container, run:
  • docker container run -d --name nginx -p 80:80 -v $(pwd):/usr/share/nginx/html nginx    (use ${pwd} in PowerShell)
  • This tells that my current working dir (pwd) is going to be mounted into that working dir in the container because my index file is here in this folder - I want it to be in the container so I can edit it here, and it will be seen live in the container
  • So, we will now edit the file on our host and then I will be in the container and see what is happening.
  • Run localhost in a browser and see the output - it's not the default html output, even though we used the regular nginx image (not a custom one built from this dir); the bind mount supplied our index.html. To prove that, run the same cmd again but without -v this time, on port 8080:
  • docker container run -d --name nginx2 -p 8080:80 nginx
  • Now, run localhost:8080 in browser, see the output - its default html output (default index.html file).
  • So it mapped it correctly.
  • Now Open a new terminal and there run:  docker container exec -it nginx bash
  • So, u r now seeing the container dirs.
  • Go to /usr/share/nginx/html and run: ls -al
  • U will see Dockerfile as well as index.html files
  • Now, in the previous terminal (where your current dir is dockerfile-sample-2), run: touch testme.txt - it creates a testme.txt file there. Then, back in terminal 2, hit ls -al again and you will see testme.txt there as well.
  • Also, echo something to the testme file using:  echo "is it me u r looking for" > testme.txt
  • And then in browser:  localhost/testme.txt
  • U will see the line in output - is it me u r looking for  . Because u wrote it in the testme.txt file.
  • So all this happened runtime. Our nginx container is able to see the changes because its a normal file path in the container and those files are on the host. And if I would delete them in the container, it will be deleted from the host because it is the same file.
  • Question: Which type of persistent data allows you to attach an existing directory on your host to a directory inside of a container?  Options : Bind mount , Named volume
  • Answer: Bind mount. This is what is used when you are trying to map the files from a directory on the host into a directory in the container.
  • Question: When making a new volume for a mysql container, where could you look to see where the data path should be located in the container?
  • Answer: Docker Hub

  • Named volumes and bind mounts with real world situations - Assignment:
  • Example: a database upgrade - you are running postgres and you want to upgrade the version, maybe due to security patches. Normally you would update the software via a package management system, and the upgrade would handle all the libraries and dependencies by itself. But how do we do that in a container?
  • So in this example, u will create a container 'postgres' with named volume 'psql-data' using version 9.6.1
  • Use official postgres repository from docker hub, go to the dockerfile for that particular version, learn the volume path, and name the volume. 
  • And once you start your container, check the logs to see when it has finished creating databases and all the startup work, because the first time you start a db container it typically does things like creating the admin user, creating the default db, etc. At some point the logs will stop and it will just be running.
  • If you did all the steps correctly, then firing 'docker volume ls' will show the volume created with the name you gave.
  • Then you stop the container and create a new 'postgres' container with the new version 9.6.2 and the same named volume, because it needs to use the same data the 1st one used. Make sure the 1st one is stopped, because both containers must not access the same data files at the same time.
  • Once new container is started, check logs and u will see that it only has a couple of lines of startup logs as it does not have to do all the work of the initial startup 
  • Assignment answers:
  • Login to docker hub, search for the official postgres image, and open the Dockerfile for the 9.6.1 tag, where you will be able to see the volume path - it should be the same for 9.6.1 and 9.6.2 as it doesn't change often. Copy that volume path to the clipboard.
  • Go to cmd prompt:
  • docker container run -d  --name psql -v psql:/var/lib/postgresql/data postgres:9.6.1
  • -d to run it in the background, volume name psql, and paste that volume (path u copied in previous step) where it will store the data, and at last the img name. 
  • Then run: docker container logs -f psql
  • -f is to follow the logs (continuous logs) so you can watch them.
  • Once u see the log with 'Database system is ready to accept connections', press ctrl+c.
  • Then - docker container stop psql
  • Now, hit the same cmd but with diff version of postgres img and diff container name as we cannot have multiple containers with same name:
  • docker container run -d  --name psql2 -v psql:/var/lib/postgresql/data postgres:9.6.2
  • Now, do: 'docker container ls -a' to see the two containers - one stopped and one running.
  • Then docker volume ls
  • One entry will be shown, with the volume name psql.
  • If you do - docker container logs psql2
  • then you will see very short logs - maybe 3-4 lines, as the initial setup steps were skipped because they had already been done by the 1st container run.

  • bind mounts - Assignment:
  • We will be using a Jekyll 'static site generator' to start a local web server.
  • This example bridges the gap between local file access and apps running in containers: take data you have on your host machine, mount it into a container, then change it on the host and watch the change reflected inside the container.
  • In this example, a web developer has common html files on his host machine. Normally he would have to download all the special tools he needs onto his host to start development (ex: node.js, mvn, java, environment vars, a file watcher that rebuilds on change, etc.), and the bigger challenge is ensuring the same environment, software setup and versions exist on all other dev, qa and prod environments.
  • With docker hub, this problem goes away, as all these software stacks are available as containers.
  • Source code for this is under - bindmount-sample-1
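  • A sketch of how such a run might look (the bretfisher/jekyll-serve image, the /site container path and Jekyll's default port 4000 are assumptions - check the image docs for the real values):
    • cd bindmount-sample-1
    • docker container run -p 80:4000 -v $(pwd):/site bretfisher/jekyll-serve
    • # edit the site files on the host, refresh localhost, and Jekyll inside the container regenerates the pages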

  • Docker basic fundamentals:
  • Docker containers occupy less disk space than VMs.
  • Multiple docker containers can be run from a single image.
  • A single image can be pulled (but not run) out of multiple docker containers.
  • Containers boot quickly because they use the host OS kernel instead of booting a full guest OS.
  • Valid UCP (Universal Control Plane) authentication providers - LDAP, AD
  • Type of hypervisor that runs on existing OS -> Type 2
  • Docker Engine composed of two entities-> Docker CLI, Docker Daemon
  • Port used for cluster mgmt- 2377
  • Benefits of application containers - Application isolation, Portability

  • Docker on Windows server containers:
    • They share OS host kernel, so host and container base image must match.
  • Docker on Hyper-V containers:
    • OS does not have to match Docker host OS.
    • Allow linux containers to run on Windows Docker hosts

  • Default container location (when running docker on windows): C:\ProgramData\docker\containers.
  • And inside containers folder, you will have folder named with container id for each container.
  • Each container folder has JSON config files.
  • To change this default dir, change it from daemon.json:
  • Go to C:\ProgramData\docker\config\daemon.json. To store container directories in another location, specify in this json file -->
  • {
  • "graph":"d:\\docker"
  • }


2021 Learning
------------------





Example: Running a jar on Windows and then on Linux – you always end up complaining. So there is tight coupling with the OS. With containerization, the jar runs on all OSes (loose coupling with the OS).

In docker, there is docker daemon – which provides the runtime engine – to create multiple instances of the particular package. Runtime engine plays the role of creating multiple containers.

Other learning through Bret Fisher course on Docker:





Docker network communication:


Windows Containers: Docker Is No Longer Just Linux
So you've maybe heard about "Windows Containers". Something that wasn't even possible till mid-2016 in Windows 10 Pro/Ent and not feasible in 
production with Docker Swarm until April 2017 with a Windows Server 2016 hotfix. They are awesome, but new!

To be clear, this course so far, even on Windows, is running what we've known as "Docker" for 4 years, which are now called "Linux Containers". 

Today Docker is much more than Linux. When you think of images, which are kernel specific, we're now talking about Linux x64, Linux x86 (32-bit), 
Windows 64bit, and a bunch more. This course still largely focuses on Linux x64 images because 90% of the concepts are the same, but 
"Windows Containers" are the new hotness! Technically, they are Native Windows .exe binaries running in Docker containers on a Windows kernel, 
and have no Linux installed.

If you check the course Description, I mention at bottom in "Course Launch Notes" about Windows Containers as a future Section I plan to add.  
When making this course in 2017, I didn't know anyone really using them because Swarm Overlay networking didn't work on Windows, Secrets didn't
 work yet, and there were lots of rough edges that people wouldn't find very useful.

But with the release of 17.09 stable, the story for Windows Containers is much better and I plan on a new section for this. Until then, here's 
some recent getting started videos from Docker and Microsoft:

Windows Containers and Docker 101

Windows and Linux Parity with Docker

Docker + Microsoft - Investing in the Future of your Applications

The Big FAQ
This course has over 5,000 people a month taking it! That's amazing to me. It also means we get lots of the same questions. Some are just things I didn't explain clearly. Some are minor issues people hit along the way. Here's the most common Q&A in order of frequency.

NOTE: Don't read all these now, but remember to come back when you hit an issue. This list is the FASTEST way to solve common course troubles.

I'm using Docker Toolbox, and http://localhost isn't working. What's wrong?
Docker Toolbox uses a default IP of http://192.168.99.100 and doesn't support the localhost feature of Docker for Windows/Mac.

$(pwd) in Windows is giving an error for bind-mounts: C:\Program Files\Docker Toolbox\docker.exe: invalid reference format.
PowerShell has a few minor differences in command format. This is a PowerShell thing, not a docker thing. When using the shell path shortcut "pwd":

For PowerShell use: ${pwd} 

For cmd.exe, bash, and Quickstart Terminal use: $(pwd) 

PowerShell Tab Completion Isn't Working Or I Can't Find the Page in Docs
The posh-docker repo is no longer being maintained, but a new better one now exists at https://github.com/matt9ucci/DockerCompletion so give that a shot. Then please thank the author if it works for you so they keep it updated. Yay open source!

I'm using Docker Toolbox and bind-mounts aren't working (sharing files between Windows and docker with -v  )
Docker Toolbox requires your files be in your profile under c:\users\<username>\   before file sharing will work in Toolbox.

Bind for 0.0.0.0:80 failed: port is already allocated. -OR- port already in use -OR- permission denied.
This will happen if you are attempting to start a new container with a port that is already in-use on your machine. Remember in TCP/UDP, only one application/service can use a single IP+PORT at a time. This doesn’t change with containers when you use -p  to bind to the host IP+PORT.

First run docker container ls  to check if there are any containers using this port - if there are none, you likely have a non-Docker related application running on your machine that is using this port. Maybe IIS, maybe Apache, etc.

If you are on a Mac, you can check what is using port 80 with the command: lsof -i :80  

If you are on Windows, you can check what is using port 80 with: netstat  

Of course - if you don’t have a reason to specifically use the port that is throwing this error, simply run your container on another port. Remember, the syntax is <host port>:<container port>  , so binding to port 8888 on your host machine with a container that uses port 80, would look like: docker container run -p 8888:80 your_image  

How do I cleanup space (images etc.)?
Run prune commands https://www.udemy.com/docker-mastery/learn/v4/t/lecture/7407918?start=0

Bind Mount Won't Show Up In Container
This is usually a Docker for Windows issue, where you need to go into Docker Settings GUI (lower right icon) and uncheck the drive where your code is, then save, and then re-check that drive to re-apply the SMB file sharing permissions between the Linux VM and the Windows OS.

Starting container process caused "exec: \"ping\": executable file not found in $PATH": unknown
That error is telling you that ping is not available in the image you’re trying to run it from. Official images have changed over time and the official nginx default image (nginx:latest) no longer has ping in it by default.  Image nginx:alpine should still have ping installed (a few of my videos show utilities like ping that are no longer in those images).

If it's a debian-based image (the default nginx) then you can also use apt-get update && apt-get install -y iputils-ping   inside the container to install it.

Lastly, I keep a “bunch of troubleshooting and handy admin utilities” in an image here that you can run ping from: bretfisher/netshoot  

https://www.udemy.com/docker-mastery/learn/v4/questions/3751216

Starting mysql container and running ps causes "ps: command not found"
Like above, this is the container shell telling you the binary "ps" isn't in your path, and not installed in the container. Docker changed the mysql image after the video was recorded and removed the ps utility. You can add it back in using the apt package manager.

apt-get update && apt-get install procps

For more info: https://stackoverflow.com/questions/26982274/ps-command-doesnt-work-in-docker-container

How to run two container websites on a single port in Docker or Swarm services
This is a bit more advanced, but common for production Swarms. You'll need a "reverse proxy"

https://www.udemy.com/docker-mastery/learn/v4/questions/3931678

Error response from daemon: pull access denied.
Double and triple-check the spelling of the image you are pulling; if you are attempting to pull a publicly hosted image - this error will not occur, but if there is a typo and Docker can’t find the image - it will expect that it is a private image and ask you to login.

Also, there are times when the config.json file gets messed up, so try docker logout && docker login. If all that still causes the same issue, try removing ~/.docker/config.json  and then pull again.

Kubernetes vs. Swarm.
https://www.udemy.com/docker-mastery/learn/v4/questions/3446126

Does this help with Docker Certified Associate?
Yes, but it’s not a study guide. Here’s the Lecture with info: https://www.udemy.com/docker-mastery/learn/v4/t/lecture/9485678?start=0

Ubuntu Container vs. Ubuntu OS, What's the Difference?
https://www.udemy.com/docker-mastery/learn/v4/questions/5390204

How to use volumes in Swarm for databases.
https://www.udemy.com/docker-mastery/learn/v4/questions/2675184

How do we do backups in docker?
https://www.udemy.com/docker-mastery/learn/v4/questions/2756448

Getting a shell in VM’s that run Docker
Workaround: https://www.udemy.com/docker-mastery/learn/v4/questions/3860412

docker run -it --rm --privileged --pid=host justincormack/nsenter1  

macOS https://www.bretfisher.com/docker-for-mac-commands-for-getting-into-local-docker-vm/

Docker for Windows https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/

Docker Toolbox docker-machine ssh default  

Windows firewalls preventing networking or bind mounts in containers
https://www.udemy.com/docker-mastery/learn/v4/questions/3258290

Anti-Virus Blocking file sharing in Windows
https://www.udemy.com/docker-mastery/learn/v4/questions/3442460

Are containers more secure than VM’s?
https://www.udemy.com/docker-mastery/learn/v4/questions/4020880

I have a network proxy and images won’t build
https://stackoverflow.com/questions/23111631/cannot-download-docker-images-behind-a-proxy/

Public vs. Private IP for Swarm advertise-addr and data-path-addr
https://www.udemy.com/docker-mastery/learn/v4/questions/3710518

Custom Docker Networks, macvlan and IP setting hardcoding
https://www.udemy.com/docker-mastery/learn/v4/questions/3706540

Installing Docker:


Installing Docker: The Fast Way
If you don't already have docker installed, Docker already has some great guides on how to do it. The rest of this Section is about how to setup 
Docker on your specific OS, but if you already know which OS you want to install on, here's the short-list for downloading it. The videos after 
this Lecture are walkthroughs of installing Docker, getting the GitHub repo, getting a code editor, and tweaking the command line if you want to. 
Feel free to skip any and all of this if you have at least docker version 17.06 and like your current setup :)

Installing on Windows 10 (Pro or Enterprise)

This is the best experience on Windows, but due to OS feature requirements, it only works on the Pro and Enterprise editions of Windows 10 
(with latest update rollups). You need to install "Docker for Windows" from the Docker Store.

With this Edition I recommend using PowerShell for the best CLI experience. See more info in the next few Lectures.

Installing on Windows 7, 8, or 10 Home Edition

Unfortunately, Microsoft's OS features for Docker and Hyper-V don't work in these older versions, and "Windows 10 Home" edition doesn't have 
Hyper-V, so you'll need to install the Docker Toolbox, which is a slightly different approach to using Docker with a VirtualBox VM. 
This means Docker will be running in a Virtual Machine that sits behind the IP of your OS, and uses NAT to access the internet.

NOTE FOR TOOLBOX USERS: For all examples that use http://localhost , you'll need to replace with http://192.168.99.100

Installing on Mac

You'll want to install Docker for Mac, which is great. If you're on an older Mac with less than OSX Yosemite 10.10.3, you'll need to install 
the Docker Toolbox instead.

Installing on Linux

Do *not* use your built in default packages like apt/yum install docker.io  because those packages are old and not the Official Docker-Built 
packages. 

I prefer to use the Docker's automated script to add their repository and install all dependencies: curl -sSL https://get.docker.com/ | sh  but 
you can also install in a more manual method by following specific instructions on the Docker Store for your distribution, like this one for 
Ubuntu.

What if None Of These Options Work

Maybe you don't have local admin, or maybe your machine doesn't have enough resources. Well the best free option here is to use 
play-with-docker.com, which will run one or more Docker instances inside your browser, and give you a terminal to use it with. 
You can actually create multiple machines on it, and even use the URL to share the session with others in a sort of collaborative experience. 
 I highly recommend you check it out.  Most of the lectures in this course can be used with "PWD", but its only real limitation is that it's time-bombed to 4 hours, at which time it'll delete your servers.

docker-compose.yml

docker-compose up - to set up volumes/networks and start all containers
docker-compose down - to stop and remove all containers and networks (add -v to also remove volumes)

If all your projects had a Dockerfile and a docker-compose.yml, then 'new developer onboarding' would be just a 2-step process:
git clone github_url
docker-compose up

sample docker-compose.yml:

version: '2'

services:
  drupal:              # service name
    image: drupal
    ports:
      - "8080:80"
    volumes:
      - drupal-modules:/var/www/html/modules
      - drupal-profiles:/var/www/html/profiles
      - drupal-sites:/var/www/html/sites
      - drupal-themes:/var/www/html/themes
  postgres:            # service name
    image: postgres
    environment:
      - POSTGRES_PASSWORD=mypwd

volumes:
  drupal-modules:
  drupal-profiles:
  drupal-sites:
  drupal-themes:

Then run docker-compose up
See the logs - it creates all the networks and volumes while starting.

To remove volumes: docker-compose down -v

To rebuild ur images if u change them - docker-compose build

In docker compose, the project (directory) name is always prepended to the names of the objects it creates (containers, built images, volumes, networks) to avoid conflicts/duplicate names.

To delete images as well: docker-compose down --rmi local  (removes locally-built images without a custom tag)

------------
Swarm intro:
Needed when:
To automate container lifecycle
Scale in/Scale out containers
Ensure containers are re-created if they fail.
Blue/green deploy means zero downtime: containers are replaced without taking the service down.
Ensure containers only run on trusted servers.
Store secrets, keys, pwds to get them to the right container.

Swarm mode is a clustering solution built inside Docker.
1.12 - the old version (where swarm mode first appeared, 2016)
1.13 - 2017 - the newer, improved version
Swarm mode is disabled OOTB (out of the box).

Run : docker info - it will show u swarm mode is on or off.
To initialize: docker swarm init
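
A minimal sketch of bringing a small swarm up (the IP addresses are placeholders):

docker swarm init --advertise-addr 192.168.1.10       # on the first node; prints a worker join command
docker swarm join-token worker                        # re-print that join command later if needed
docker swarm join --token <token> 192.168.1.10:2377   # run on each additional node
docker node ls                                        # from a manager, list all nodes in the swarm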

docker service = docker run
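
For example (a sketch, not from the course; the service name is illustrative):

docker service create --name web --replicas 3 nginx
docker service ls              # shows 3/3 replicas
docker service ps web          # shows which node each task (container) landed on
docker service scale web=5     # scale out without recreating anything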

With swarm, a new networking driver is introduced - overlay.
You can use it via: docker network create --driver overlay
It creates a swarm-wide bridge network.
This is for container-to-container traffic inside a single swarm.
You can also enable IPSec (AES) encryption at network creation, but by default it's off for performance reasons.
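
For example (a sketch; the network and service names are illustrative, and my-api-image is a placeholder image):

docker network create --driver overlay --opt encrypted backend
docker service create --name db --network backend -e POSTGRES_PASSWORD=mypwd postgres
docker service create --name api --network backend my-api-image   # "api" can reach "db" by its service name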

You can prefix the docker network ls cmd with the linux 'watch' cmd - so every 2 sec it will show the latest results.
If your webapp is deployed on node1, db on node2, and something else on node3, then accessing the app in a browser via the ip address of any of the 3 nodes shows the welcome msg. Why? - due to the Routing Mesh.

Routing Mesh - it routes the ingress (incoming) packets for a service to a proper task. Spans all nodes in a swarm.
Uses IPVS from the linux kernel. Load balances swarm services across their tasks.
Routing Mesh works in 2 ways: 1) Front end and back end (db) don't talk to each other directly but via a VIP (virtual IP) on overlay networks.
2) External traffic incoming to published ports (all nodes listen).
The service name is by default a DNS name inside containers.
Routing mesh is a stateless LB. It load balances at the TCP level (OSI Layer 4), not at the DNS/application level.
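
A sketch of the routing mesh in action (node IPs are placeholders):

docker service create --name hello -p 8080:80 --replicas 2 nginx
# every node in the swarm now listens on 8080, even a node that isn't running one of the 2 tasks:
curl http://192.168.1.10:8080
curl http://192.168.1.11:8080
curl http://192.168.1.12:8080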

docker stack deploy - used rather than individual docker service commands when deploying a whole stack (multiple services) from a compose file.

Stack files accept the deploy: key in the compose file, but can't do build: - images must already be built.
The docker-compose CLI is not needed on the swarm server.
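
A minimal stack file sketch (stack.yml; the service name and values are illustrative):

version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s

Then:
docker stack deploy -c stack.yml mystack    # deploy/update the whole stack
docker stack services mystack               # list the services and their replica counts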

docker secret create secret_name filename.txt   (a secret has a name plus a file holding its value)
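
A sketch of creating and using a secret (the secret and service names are illustrative):

echo "mypwd" | docker secret create psql_pass -       # "-" means read the secret value from stdin
docker secret ls
docker service create --name psql --secret psql_pass -e POSTGRES_PASSWORD_FILE=/run/secrets/psql_pass postgres
# inside the container the secret shows up as the file /run/secrets/psql_pass
# (the official postgres image supports *_FILE env vars for reading values from files)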

Kubernetes is a container orchestrator, similar to docker swarm.
An orchestrator takes all the containers you ask it to run, takes a series of nodes or servers, and decides how to run those container workloads across those nodes.
Kubernetes (K8s) is an orchestrator released in 2015 by Google, and now maintained by the open source community around the world.
Kubernetes is a set of APIs (themselves running in containers) used to manage a set of servers and execute your containers, on Docker by default.
It can also run container runtimes that aren't Docker - like containerd.
It gives you APIs and a CLI to deploy and maintain server infrastructure similar to what you would have with swarm.
In Kubernetes (instead of the docker cmd), you use kubectl (pronounced "cube control").
Many clouds provide it for you. Many vendors make a distribution of it.

Not all solutions need orchestration. We have used docker run, docker compose, AWS tools like ELB, etc. before - which are also sufficient for some customers. So not every customer will go with the orchestration approach.
But most of us will go with container orchestration, as the industry is moving in that direction.
Servers + change rate = benefit of orchestration
Top 2 solutions are: Kubernetes and Docker Swarm. Others are ECS, Cloud Foundry, Mesos/Marathon - but those are legacy ones, as they were released before Docker and Kubernetes came into the picture.
K8s and swarm are also helpful if you have a multi-server architecture you want to manage on-prem, in the cloud, and in the data center.
If K8s, decide which distribution: cloud or self-managed (Docker Enterprise, Rancher, OpenShift, Canonical, VMware PKS).
You can also use the raw distribution of K8s from GitHub.

K8s or Swarm:
They are both container orchestrators and both solve the same problems.
Swarm is easier to deploy/manage/get started with.
K8s has more functionality, flexibility and customization. It can solve more problems in more ways, and has wider adoption and support.

Kubectl is the CLI used to talk to the kubernetes API.
Single server in a kubernetes cluster is called Node (same as in swarm)
Kubelet is the small agent running on each node that allows the node to talk back to the Kubernetes master.
Since docker has swarm built in, it didn't need such an agent.
The K8s master is called the control plane - it's the set of containers that manage the cluster. The control plane includes the API server,
scheduler, controller manager, etcd, and more.

Pod - one or more containers running together on a node. It is the basic unit of deployment. Containers are always in pods.
Controller - for creating/updating pods and other objects.
Service - network endpoint to connect to a pod.
Namespace - filtered group of objects in a cluster.

3 ways to create pods from kubectl CLI:
kubectl run  - changing to be only for pod creation (similar to docker run)
kubectl create  - create some resources via CLI or YAML (similar to docker create for swarm) 
kubectl apply - create/update anything via YAML (similar to stack deploy for swarm)

Two ways to deploy pods (containers) - via commands, or via YAML

To list the pods: kubectl get pods
To see all objs: kubectl get all
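
For example (a sketch; the names are illustrative):

kubectl run my-nginx --image nginx                    # in newer kubectl versions this creates just a pod
kubectl create deployment my-nginx --image nginx      # create a deployment via the CLI
kubectl apply -f deployment.yml                       # create/update whatever the YAML describes
kubectl get pods
kubectl get all
kubectl delete deployment my-nginx                    # clean up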