Table of contents
- 1.1. What Is Docker
- 1.2. Docker Vs Virtual Machine
- 1.3. Installing Docker
- 1.4. Exploring Docker Dashboard
- 1.5. Tools
- 1.6. Getting Started With Docker
- 2.1. Understanding Containers
- 2.2. Docker Images And Containers
- 2.3. Managing Containers
- 2.4. Docker Ps Format
- 2.5. Exposing Ports
- 2.6. Exposing Multiple Ports
- 2.7. Naming Containers
- 2.8. Running Container In The Background
- 5.1. Volumes
- 5.2. Bind Mount Volumes
- 5.3. Bind Mount Volumes In Action
- 5.4. Using Volumes For Local Development
- 5.5. Docker Volumes
- 5.6. TMPFS Mount
- 6.1. Dockerfile
- 6.2. Creating Dockerfile
- 6.3. Building Docker Images
- 6.4. Running A Container From Custom Image
- 6.5. Investigate Container File System
- 6.6. Building ExpressJS API
- 6.7. Dockerfile And Building Image For user-api
- 6.8. Running Container For user-api Image
- 6.9. Exploring Dockerfiles
- 6.10. Dockerfile Reference
- 7. Image Tagging And Versioning
- 7.1. Pulling Images Using A Specific Tag
- 7.2. Creating Tags
- 7.3. Creating Version 2 Of The Dashboard
- 7.4. Never Run Latest In Production
- 7.5. Image Variants
- 10.1. How To Communicate Between Containers
- 10.2. Docker Network
- 10.3. MongoDB Container
- 10.4. MongoExpress
- 10.5. Understanding Container Communication
- 10.6. Another Example
- Platform for building, running and shipping applications.
- Developers can easily build and deploy applications running in containers.
- A container is like a running instance of our application, packaged up with everything it needs.
- Local development is the same across any environment.
- There are scenarios where the application may work on your local machine, but doesn't work on development, staging or production environment, because of hardware issues, installation problems etc.
- Docker abstracts and solves all of those problems so that it always works on every environment and every machine.
- Docker is also used a lot for CI/CD (Continuous Integration/Continuous Delivery) workflows.
- Virtual Machines (VMs) are an abstraction of physical hardware turning one server into many servers.
- The hypervisor allows multiple VMs to run on a single machine.
- Each VM includes a full copy of an operating system, the application, necessary binaries and libraries – taking up tens of GBs.
- VMs can also be slow to boot, whereas Docker containers boot very fast.
- Each VM consists of:
- Infrastructure
- Hypervisor
- VM (application) running on a Linux OS (this can have multiple copies, each one running the application).
- Docker runs your applications in something called containers.
- Containers are an abstraction at the app layer that packages code and dependencies together.
- Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space.
- Containers take up less space than VMs (container images are typically tens of MBs in size), can handle more applications and require fewer VMs and operating systems.
- Each container consists of:
- Infrastructure
- Host operating system
- Docker
- Application (which can have multiple copies).
- The key difference is the host operating system.
- The VM setup has 3 OS copies, while Docker has only 1.
- Therefore, we gain more performance using Docker.
| Docker | VM |
|---|---|
| Portable | Requires more memory |
| Requires less memory | Each VM runs on its own OS |
| All containers share the same OS | Startup time in minutes |
| Startup time in milliseconds | More secure |
| Process level isolation | Less popular these days |
| Very popular | |
- Download Docker from https://docker.com/ and follow the installation instructions.
- Containers / Apps tab:
- This is where the applications we built will be running.
- Initially there shouldn't be anything listed.
- Images tab:
- These are the images either on your local machine or remote repositories (Docker registries).
- Volumes tab:
- This is used so you can share data between the host and containers.
- Dev Environments:
- Define your project's configuration as code, distribute your project easily amongst your team, and have everyone work on the same code and any dependencies with one click.
Useful tools and resources to know and install that will be helpful in using Docker:
- Knowledge of: terminal, bash & Vim
- https://gitforwindows.org/ (Windows)
- https://cmder.app/
- https://iterm2.com/ (Mac)
- Verify that Docker is installed:
docker --version
- This command should output the installed Docker version.
- View all available Docker commands:
docker
- Create a Hello World Docker application:
docker run -d -p 80:80 docker/getting-started
- Navigate to the browser and check if Docker is now running on localhost:
http://localhost
- List all running Docker processes:
docker ps
- There should be a `docker/getting-started` container process running.
- Stop the container from running by targeting container ID:
docker stop 9919e467353a
- Remove the entire docker container:
docker rm 9919e467353a
- Pull and run a specific Docker image:
docker run -d -p 80:80 milanobrenovic/2048
- Now test if a 2048 game is running on localhost:
http://localhost
- Container is an isolated environment for running applications.
- Contains everything your application needs, such as:
- Operating system
- Tools and binaries
- And most importantly, software (spring boot, nodejs, golang, javascript, or whatever is the backend application built on)
- When you run `docker run` followed by the image name, the application gets deployed on Docker.
- The command `docker ps` gives us a list of all running containers:
docker ps
- There should be 0 containers running.
- Assuming there are no running containers, let's run the `milanobrenovic/2048` image:
docker run -d -p 80:80 milanobrenovic/2048
- Now when you list all the Docker processes running:
docker ps
- There should be 1 running.
- Basically the process running here is a container.
- Execute into the running container via interactive mode using container ID:
docker exec -it 4edc54e943c5 sh
- Within the shell, list all the files and folders:
ls
- Navigate to the `nginx` html directory:
cd /usr/share/nginx/html
- By running `ls` in this directory, you can see all the files and folders uploaded via Docker, which are then served on http://localhost.
- Usually when building software, you have your source code, which is any code written in any programming language.
- What you (as a developer) do, is you take that code, and then you build a Docker image.
- From this Docker image, you can run a container.
- Docker image is like a template for running the application.
- From 1 Docker image, you can run multiple containers.
- A Docker container can be a javascript app, nginx, postgres or any technology that you want to use.
- In the previous examples, `milanobrenovic/2048` is the image name.
- From this image name, we can run a container.
- To run a different image, but on port `8080`, use the command:
docker run -d -p 8080:80 nginx
- Here we have started the `nginx` image, which runs a container serving the nginx starter template.
- List all Docker processes running:
docker ps
- Check in the browser and verify that this container is running:
http://localhost:8080
- To browse through all public Docker images, go to https://hub.docker.com/search?q=.
- Let's try to install a WordPress Docker image:
docker run -d -p 8081:80 wordpress
- Note: this will run on port `8081` because `80` and `8080` are already taken.
- Check on localhost if WordPress Docker container is running:
http://localhost:8081
- List all running Docker containers:
docker ps
- By now it should be 3 containers: 2048, nginx and wordpress.
- Stop the `wordpress` container by container ID:
docker stop 3635573b4e19
- List ALL containers (including the ones that are not running):
docker ps -a
- Remove the `wordpress` container completely by container ID:
docker rm 3635573b4e19
- In case the container is running, you would need to stop it first and then remove it.
- To remove a running container without stopping it first, use command:
docker rm -f 3635573b4e19
- To get a different output format than the default `docker ps` output, export this variable in the system environment variables:
export DOCKER_ROW_FORMAT="ID:\t\t{{.ID}}\nNAME:\t\t{{.Names}}\nIMAGE:\t\t{{.Image}}\nPORTS:\t\t{{.Ports}}\nCOMMAND:\t{{.Command}}\nCREATED:\t{{.CreatedAt}}\nSTATUS:\t\t{{.Status}}\n"
- To apply this new format, use command:
docker ps --format=$DOCKER_ROW_FORMAT
- Currently, we have a container which is running a 2048 game, which is based off nginx image, and it's listening on port 80.
- Sometimes we may want to expose the application to users.
- It could be a React application, just a web browser, pretty much any client.
- The client, in order to access the application, needs to talk to the container.
- Container then exposes port 80, because nginx is listening on port 80.
- This allows a client to issue a request to `http(s)://ip-address:80`.
- The flag `-p 80:80` sets the port mapping:
  - The first port `80` refers to the host.
  - The second port `80` refers to the container.
- List all running Docker containers:
docker ps
- Remove the `milanobrenovic/2048` container:
docker rm -f beautiful_swirles
- Note: you can target a container by its randomly generated name instead of container ID.
- Run a container and expose multiple ports on the host:
docker run -p 80:80 -p 4200:80 -p 3000:80 -d milanobrenovic/2048
- Now test all localhost + port addresses and verify that it works:
http://localhost
http://localhost:4200
http://localhost:3000
- All of these URLs should open the same container.
- List all running Docker containers:
docker ps
- Remove the `milanobrenovic/2048` container:
docker rm -f 0bdd84f3b8f5
- Run `milanobrenovic/2048` again, but this time give the container a name:
docker run --name 2048 -d -p 80:80 milanobrenovic/2048
- List all running Docker containers and verify that the name was changed:
docker ps
- This means we can remove the container by the given name instead of the randomly generated one or the container ID:
docker rm -f 2048
- Remove the `nginx` container:
docker rm -f 9c9b8100a7e6
- Recreate the `nginx` container, but with its own name:
docker run --name website -d -p 8080:80 nginx
- List all the containers:
docker ps
- Remove the `website` container:
docker rm -f website
- Running the container without `-d` will print the logs to the terminal:
docker run --name website -p 8080:80 nginx
- This still works, but the terminal can't be used because the container is running in the foreground, not in the background.
- That is why the container should usually be run with `-d`, which runs it in background mode:
docker run --name website -d -p 8080:80 nginx
- A Docker image is a file used to execute code in a Docker container.
- Set of instructions to build a Docker container.
- From a single Docker image, we can run multiple containers.
- Contains:
- Application code
- Libraries
- Tools
- Everything needed to run your application
- Docker image is like a blueprint from which we can run multiple instances (containers) of the application we're building.
- List all docker images:
docker image ls
- If the `website` container exists, delete it:
docker rm -f website
- Run the website container again:
docker run --name website -d -p 8080:80 nginx
- Notice how quickly the container started: the image was already present on the local machine, so Docker just uses the existing image.
- To delete the image:
docker image rm nginx
- In case there's a conflict and the image can't be removed, it's probably because the container is still running, so delete the container first:
docker rm -f website
- Now remove the image again:
docker image rm nginx
- Run the website container again:
docker run --name website -d -p 8080:80 nginx
- This time it should take longer to start the container because the image doesn't exist locally, so it has to pull it from Docker repository.
- Remove the `website` and `nginx` containers:
docker rm -f website
docker rm -f nginx
- The `docker run ...` command pulls the image first and then runs the container, but if you want to just pull the image without running it, use the command:
docker pull nginx
- Inspect a specific image:
docker image inspect nginx
- Docker architecture follows the client-server approach.
- The client is the CLI that we've been using.
- The server is the Docker Host.
- There are also Registries (place from which we use the public/private images).
- Inside the Docker host, there is something called a Docker Daemon.
- Docker Daemon is responsible for handling requests from the client, such as:
  - `docker build`
  - `docker run`
  - `docker pull`
- Let's say we want to run a container: if the image is not present on the local host, the Docker daemon fetches it from the registry and stores it on the host.
- Once it fetches the image, we can run containers from the image.
- For example one container for spring boot, the other container for postgres database etc.
- When the client issues commands to the Docker daemon, this communication is transferred via the Docker Sock.
- Docker Sock is basically a UNIX socket.
- The Docker client usually communicates with the daemon via the UNIX socket `/var/run/docker.sock`.
- A UNIX socket is an inter-process communication mechanism that allows bidirectional data exchange between processes running on the same machine.
- Allows data to be shared between containers and host.
- Data can be kept after container dies.
- Let's say we have 2 containers, and we want to share data between them – this should be done through volumes.
- The types of files to share through volumes would typically be:
- Certificates
- Config files
- Folders
- Anything you want
- Run bash image from Docker, insert text into the file and print it:
docker run bash bash -c "echo foo > bar.txt && cat bar.txt"
- Here we have pulled the `bash` image from the Docker repository.
- Then using `-c` (command), the text "foo" was echoed into `bar.txt`, and `cat bar.txt` read the contents of that file, printing out just "foo".
- Run the exact same image but without echoing any text into the file:
docker run bash bash -c "cat bar.txt"
- It should return an error because it can't find the `bar.txt` file.
- List all containers INCLUDING the ones that are not running:
docker ps -a
- There should be bash containers with status `Exited`.
- When we try to just read the contents of `bar.txt`, it fails because each container is separate and has shut down.
- This is one of the reasons why we need to use volumes.
- First we have a host.
- Host is your operating system that is running Docker.
- Inside the host, let's say that we have:
- Container
- Filesystem
- Memory (RAM)
- When it comes to volumes, we can have something called a bind mount.
  - This allows the host to share its own file system with the container using the flag `-v host-path:container-path`.
- To get help for all `docker run` options:
docker run --help
- There should be a `-v` flag to bind mount a volume.
- To bind mount a volume, we'll use a `bind-mount` directory:
docker run -v $PWD/bind-mount:/tmp bash bash -c "echo foo > /tmp/bar.txt && cat /tmp/bar.txt"
- In case the mount is denied, in Docker Desktop software go to:
- Settings
- Resources
- File sharing
- Add a resource file path to bind-mount directory
- Otherwise, Docker should create a `bar.txt` file with contents of `foo`.
- Let's try to read the contents of that same file:
docker run -v $PWD/bind-mount:/tmp bash bash -c "cat /tmp/bar.txt"
- This should now print the contents successfully.
- Let's pick some free admin template such as https://startbootstrap.com/themes.
- Run the `nginx` container and name it `dashboard`:
docker run --name dashboard -d -p 8080:80 nginx
- Verify on localhost that it works:
http://localhost:8080
- Delete this container:
docker rm -f dashboard
- Run a container but using the dashboard directory which contains an admin dashboard template:
docker run --name dashboard -v $PWD/dashboard:/usr/share/nginx/html -d -p 8080:80 nginx
- Verify on localhost this admin dashboard template is running:
http://localhost:8080
- Make some changes inside the index.html file at line 38 by changing the title to something else.
- These changes should take effect immediately on the browser now because we have mounted a volume.
- Whatever changes that we make will be reflected on the host, and vice versa.
- Inside the Filesystem, there is an area which is used specifically for Docker.
- What we can do is from our container, we can create a volume directly into this area.
- The key thing here is that this Filesystem is managed by Docker itself, so we have no control over it.
- Create a volume and name it `vol1`:
docker volume create vol1
- Inspect this newly created volume:
docker volume inspect vol1
- List all volumes:
docker volume ls
- There should be the `vol1` volume we created just now.
- To remove a specific volume:
docker volume rm vol1
- There are 3 different types of volumes:
  - Bind mount
    - This is when you take a portion of your file system and share it between the host and containers.
  - Docker Volume
    - This uses a portion of the file system that is managed specifically by Docker itself.
  - TMPFS mount
    - Used when you want to mount to RAM, basically temporary storage.
- By now, we've seen how from code we can create a Docker image, which then runs containers.
- The images used so far were images that someone else built for us, such as `nginx`, `milanobrenovic/2048`, etc.
- To create a Docker image from our own code, we need a Dockerfile.
- Dockerfile is a set of commands used to assemble a Docker image.
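As a rough sketch of what such a file could contain for the `dashboard` example used later (the actual Dockerfile is not shown in these notes, so the exact contents are an assumption), it might be as small as:

```dockerfile
# Sketch, assuming the static dashboard files sit next to this Dockerfile
# in the dashboard/ build context used by `docker build dashboard/. -t dashboard`
FROM nginx
# Copy the template files into nginx's default web root
COPY . /usr/share/nginx/html
```

Every Dockerfile starts with a `FROM` instruction naming the base image, followed by instructions that add layers on top of it.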
- List all containers including those which are not running:
docker ps -a
- There should be a container with the `NAMES` column labeled as `dashboard`; if not, run the command:
docker run --name dashboard -v $PWD/dashboard:/usr/share/nginx/html -d -p 8080:80 nginx
- If the container is not running, run it:
docker start dashboard
- Instead of mounting a volume, what we want is to create a Docker image that contains everything regarding the `dashboard` template, so we don't have to mount anything.
- To solve this, create a Dockerfile along with instructions to build that Docker image.
- List all images:
docker image ls
- Build a Docker image using the Dockerfile:
docker build dashboard/. -t dashboard
- `-t` is a tag and allows us to give a name to the image that's being created.
- To confirm that this has worked, run this command:
docker image ls
- Under the `REPOSITORY` column there should be an image called `dashboard`.
- First remove the existing `dashboard` container if it exists:
docker rm -f dashboard
- List all images:
docker images
- We no longer want to use the `nginx` image; this time we want the `dashboard` image.
- Run the container, but from the `dashboard` image we just built:
docker run --name dashboard -d -p 8080:80 dashboard
- List all containers:
docker ps
- Navigate to localhost and verify that the admin dashboard template is still working:
http://localhost:8080
- Note: this is being run from the `dashboard` image.
- This way we don't have to mount any volume.
- Execute into the `dashboard` container via interactive mode:
docker exec -it dashboard sh
- Navigate to the nginx html directory:
cd /usr/share/nginx/html
- This is where the entire `dashboard` image content is uploaded; list everything:
ls
- Generate a starter ExpressJS app, full documentation at: https://expressjs.com/en/starter/installing.html.
- Create a server.js ExpressJS starter file with 2 example routes.
- One option is to install Express locally, but since we have Docker, we don't have to; just run the command:
docker run -w /src -v $PWD/user-api:/src --rm node npm init --yes
- `-w /src`
  - Sets the working directory inside the container to `/src` (the folder is created if it doesn't exist).
- `-v $PWD/user-api:/src`
  - Mounts the host directory `$PWD/user-api` into the `/src` folder we just set.
- `--rm`
  - Removes the container when it exits (so we don't have to manually run `docker rm ...`).
- `node npm init --yes`
  - `node` is the image name from the Docker registry.
  - `npm init --yes` initializes the project, approving every prompt with `--yes`.
- There should be a package.json file generated now.
- Now let's install Express:
docker run -w /src -v $PWD/user-api:/src --rm node npm i -S express
- Add a Dockerfile with instructions to build an image of this ExpressJS backend app.
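The notes don't show the Dockerfile's contents, but based on the commands used in this section (port `3000`, a `server.js` entrypoint), a plausible sketch could look like the following; treat the base image tag and file names as assumptions:

```dockerfile
# Sketch of a Dockerfile for the user-api ExpressJS app (assumed layout)
FROM node:18
WORKDIR /app
# Copy manifest files first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm install
# Copy the rest of the source, including server.js with the two routes
COPY . .
# The app listens on port 3000, matching `docker run -p 3000:3000`
EXPOSE 3000
CMD ["node", "server.js"]
```

Copying `package*.json` before the rest of the source is a common layer-caching trick: `npm install` only re-runs when the dependency manifest changes.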
- Build this image from the Dockerfile:
docker build user-api/. -t user-api
- List all images and confirm that `user-api` is showing:
docker images
- Run the image we just built:
docker run --name user-api -d -p 3000:3000 user-api
- List all containers and verify that `user-api` is running now:
docker ps
- Navigate to the localhost and verify that the backend API routes work:
# Outputs `Hello World!`
http://localhost:3000
# Outputs a JSON array of user objects
http://localhost:3000/api/v1/users
- On Docker Hub we can search for all the built Docker images: https://hub.docker.com/search?q=.
- For example, postgres:
- https://hub.docker.com/_/postgres
- Scrolling down, there's a version `15` tag.
- Clicking it should open a `Dockerfile`, which has Docker instructions, runs shell scripts, and begins with the `FROM` instruction.
- Pro-tip: to learn how to write Dockerfiles, it is highly advisable to look at how other people write Dockerfiles.
- Official documentation reference regarding Dockerfile: https://docs.docker.com/engine/reference/builder/
- Pull the `postgres` image:
docker pull postgres
- Notice how it's using the default tag: latest.
- This means that it's downloading the latest version of the `postgres` image.
- List all images:
docker images
- There should be a `postgres` image.
- Pull a specific image version by targeting a tag:
docker pull postgres:14beta2
- List all images again and verify the tag version is now `14beta2`:
docker images
- Create a new tag for the `dashboard` image based off the `latest` tag:
docker tag dashboard:latest dashboard:v1
- This should now create a `v1` version of our `dashboard` image.
- List all images and confirm there is a `dashboard` image with the `v1` tag:
docker images
- Remove a specific image with the specific tag:
docker rmi dashboard:v1
- Also delete the whole `dashboard` container:
docker rm -f dashboard
- Build a `dashboard` image, but with 2 different tags:
docker build -t dashboard:latest -t dashboard:v1 dashboard/.
- List all images and confirm there is a `dashboard` image with the `v1` tag:
docker images
- Run the `v1` container:
docker run --name dashboard-v1 -d -p 8080:80 dashboard:v1
- Now run the `latest` container, but on port `8081`:
docker run --name dashboard-latest -d -p 8081:80 dashboard:latest
- Verify on localhost that both instances are the exact same thing in this case:
http://localhost:8080
http://localhost:8081
- Update index.html at some parts just to make it distinct that it's a version 2.
- Build this `v2` image:
docker build -t dashboard:latest -t dashboard:v2 dashboard/.
- List all images and verify the `v2` image exists:
docker images
- Remove the `dashboard-latest` container:
docker rm -f dashboard-latest
- Run `dashboard-latest` again:
docker run --name dashboard-latest -d -p 8081:80 dashboard
- Check if the `dashboard-latest` changes have been reflected on localhost:
http://localhost:8081
- Now run `v2` on port `8082`:
docker run --name dashboard-v2 -d -p 8082:80 dashboard:v2
- Check if the `dashboard-v2` changes have been reflected on localhost:
http://localhost:8082
- It should be the same as `v1` since nothing was changed.
- Never run the `latest` tag of custom images in production.
- If, for example, you're running Kubernetes, or a VM that restarts, it would pull the image again and always pull the latest one.
- This leaves you without control over which image version is running in production.
- Always stick to using a tag.
- Always stick to using a tag.
- Take a look at, for example, the Node image.
- There are several supported tags (versions of Node).
- More information regarding differences between these versions can be found on this Medium Article.
- Some versions are:
  - stretch/buster/jessie
    - Written for different Debian releases.
  - -slim
    - Pared down version of the full image.
    - Only installs the minimal packages needed to run your particular tool.
    - Leaves out lesser-used tools and binaries that you don't need.
  - -alpine
    - Based on the Alpine Linux Project.
    - An operating system built for use inside containers.
    - Has a really tiny size.
    - However, some teams are moving away from alpine because these images can cause compatibility issues that are hard to debug.
- A Docker registry is a storage and distribution system for Docker images, such as Docker Hub.
- There can be 2 types of images:
  - Public
    - Publicly available; anyone can pull these images.
  - Private
    - Images you have full control over, and only you can pull them.
- The command `docker pull` fetches a Docker image from a Docker registry to our local machine (the host).
- Any image stored on the local machine can be pushed back to the Docker registry using the command `docker push`.
. - Most popular Docker registries are:
- Sign into Docker Hub: https://login.docker.com/u/login
- Try to pull one of the public images:
docker pull milanobrenovic/kubernetes:frontend-v1
- This is a public image and anyone can pull it without authenticating.
- Now try to pull a private image:
docker pull milanobrenovic/private-frontend
- This should fail if you're not authenticated into the `milanobrenovic` Docker Hub account.
- To authenticate, use command:
docker login -u 'milanobrenovic' -p '<password>'
- Note: you can view all the login commands with:
docker login --help
- View the Docker login configuration:
cat ~/.docker/config.json
- There should be a JSON key named `credsStore`:
{
"credsStore": "desktop"
}
- `desktop` means you will be prompted to enter your username/password manually when you want to push a repository to Docker Hub.
- `osxkeychain` is Mac-only and means Docker will automatically use the username/password saved in the Keychain Access app, without you having to log in manually.
- List all images:
docker images
- Make sure there is a `user-api` repository available locally.
- Since it exists on the local machine only, we have to re-tag it to the Docker Hub repository `milanobrenovic/user-api`:
docker tag user-api:latest milanobrenovic/user-api:latest
- List all images again:
docker images
- Verify that there is an image from the repository `milanobrenovic/user-api` this time.
- Push this local image to Docker Hub:
docker push milanobrenovic/user-api:latest
- Get more information about a specific container:
docker inspect dashboard-v1
- View logs for a specific container:
docker logs dashboard-v1
- View logs in real-time to watch changes as they happen:
docker logs dashboard-v1 -f
- Execute into the container `user-api` and list all its environment variables:
docker exec user-api env
- List all the files in the working directory:
docker exec user-api ls
- Get working directory:
docker exec user-api pwd
- Get root directory:
docker exec user-api ls /
- Execute into the `user-api` container shell via interactive mode:
docker exec -it user-api sh
- From inside the running container, you can use all the usual Linux commands, such as:
  - `pwd` - print the working directory.
  - `ls` - list all files and folders.
  - `cd /` - change directory to root.
  - `top` - see the top processes running within this container.
  - `df` - check disk space usage.
  - ...
- If the container doesn't support `sh`, you can also `bash` into it:
docker exec -it user-api bash
- Let's say that we have a Mongo Express container and a MongoDB container.
- Mongo Express is a GUI client that allows you to connect to the Mongo database, so you can see all the collections and documents, perform queries, etc.
- To connect these two containers together, using localhost will NOT work.
- That is because each container is self-contained for itself, and it only knows about the services running inside of that container.
- To have these two containers talk to each other, we have to use Docker Network.
- A network needs to be created and attached to these two containers.
- When containers want to talk to each other, they just refer to the container name itself.
- To allow containers to talk to each other, create a network called `mongo`:
docker network create mongo
- List all networks:
docker network ls
- Confirm that there is a `mongo` network listed.
- To remove a network, use command:
docker network rm mongo
- Inspect a specific network:
docker network inspect mongo
- The default network driver should be `bridge`.
- Full documentation for the Docker `mongo` image:
- List all networks:
docker network ls
- Confirm that there is a `mongo` network listed.
- Run a MongoDB container on this newly created network:
docker run --name mongo -d -p 27017:27017 --network mongo -e MONGO_INITDB_ROOT_USERNAME=username -e MONGO_INITDB_ROOT_PASSWORD=secret mongo:5.0.15
- `--name mongo`
  - Give this container the name "mongo".
- `-d`
  - Run this container in detached mode (background mode).
- `-p 27017:27017`
  - Run MongoDB on port 27017.
- `--network mongo`
  - Attach the container to the mongo network.
- `-e MONGO_INITDB_ROOT_USERNAME=username`
  - Set the database username environment variable to "username" (to keep it simple).
- `-e MONGO_INITDB_ROOT_PASSWORD=secret`
  - Set the database password environment variable to "secret".
- `mongo:5.0.15`
  - Use the "mongo" Docker image, targeting version 5.0.15.
- View logs of the `mongo` container:
docker logs mongo
- Notice at the bottom how it's "Waiting for connections", listening on "localhost" and port "27017".
- Full documentation for the Docker `mongo-express` image:
- Run the `mongo-express` container:
docker run --name mongo-express -d -p 8081:8081 --network mongo -e ME_CONFIG_MONGODB_ADMINUSERNAME=username -e ME_CONFIG_MONGODB_ADMINPASSWORD=secret -e ME_CONFIG_MONGODB_SERVER=mongo mongo-express
- `--name mongo-express`
  - Give this container the name "mongo-express".
- `-d`
  - Run this container in detached mode (background mode).
- `-p 8081:8081`
  - Run Mongo Express on port 8081.
- `--network mongo`
  - Attach the container to the mongo network.
- `-e ME_CONFIG_MONGODB_ADMINUSERNAME=username`
  - Set the admin username environment variable to "username" (has to match MongoDB).
- `-e ME_CONFIG_MONGODB_ADMINPASSWORD=secret`
  - Set the admin password environment variable to "secret" (has to match MongoDB).
- `-e ME_CONFIG_MONGODB_SERVER=mongo`
  - Set the server name environment variable to "mongo" (has to match the MongoDB container name).
- `mongo-express`
  - Use the "mongo-express" Docker image, targeting the latest version.
- View logs of the `mongo-express` container:
docker logs mongo-express
- It should print out that the server is open to allow connections from anyone.
- Test that it works on localhost:
http://localhost:8081
- It should display a MongoDB GUI.
- MongoExpress container should now successfully communicate with MongoDB container, via the same network.
- When connecting to MongoDB, Mongo Express uses `http://mongo:27017`.
- The word `mongo` here is the host.
- The host is the same as the container name.
- Run a new `mongo` container and enter its shell in interactive mode:
docker run --rm -it mongo sh
- `--rm` removes the container as soon as we exit interactive mode.
- Within the container shell, type:
mongosh
- It should fail to connect because there is nothing running inside of this container.
- Within the shell, connect to the MongoDB container:
mongosh --host mongo -u username -p secret
- `--host mongo`
  - Instead of using localhost, we use the MongoDB container name.
- `-u username`
  - The username set in the MongoDB container.
- `-p secret`
  - The password set in the MongoDB container.
- However, this should still fail to connect because the host (mongo) is not found.
- This is because the container is not attached to the mongo network.
- Exit the shell and run the same command, but this time targeting the mongo network:
docker run --rm --network mongo -it mongo sh
- Now, within the shell, try to connect to MongoDB:
mongosh --host mongo -u username -p secret
- This should now connect to the database successfully.
- List all databases within the shell:
show databases
- Execute into the dashboard-v1 container shell via interactive mode:
docker exec -it dashboard-v1 sh
- Try a GET request against the user-api container's API:
curl http://user-api:3000/api/v1/users
- It should fail because these two containers are not running under the same network.
- Create a new network:
docker network create test
- Connect each container to the network:
docker network connect test user-api
docker network connect test dashboard-v1
- Execute into the dashboard-v1 container shell again via interactive mode:
docker exec -it dashboard-v1 sh
- Try the GET request against the user-api container again:
curl http://user-api:3000/api/v1/users
- Now it should fetch the JSON from the API successfully.
- To disconnect a container from a specific network:
docker network disconnect test dashboard-v1
- Note: although you can, you don't have to disconnect user-api, because it is not the one talking to the dashboard-v1 container.
- To delete a specific network:
docker network rm test
- Docker Compose is a tool for defining and running multi-container Docker applications using a single yaml file.
- Instead of running several individual Docker commands, we define all of the resources in one file and create them with a single command.
- To destroy all resources created by that file, we can just use the same file and teardown everything.
- View all the commands of Docker Compose:
docker compose --help
- Note: docker-compose is v1; docker compose is v2.
- Compose v1 reached end of life at the end of June 2023, so use only Docker Compose v2.
- Let's define a file that will enable us to start both MongoDB instance and MongoExpress, and have them talk to each other.
- Create a docker-compose.yml file; Compose looks for this filename by default (use the -f flag to point at a different file).
- In order for the mongo and mongo-express containers to communicate, we need to add them to the same network within the Docker Compose yaml file.
- In docker-compose.yml, define networks and apply it to both containers.
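Such a docker-compose.yml could look roughly like this. This is a sketch, not the course's exact file: the image names, credentials, environment variables, and network name mirror the earlier docker run commands, and the MONGO_INITDB_ROOT_* variables are the official mongo image's way of setting the admin user.

```yaml
services:
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: username
      MONGO_INITDB_ROOT_PASSWORD: secret
    networks:
      - mongo

  mongo-express:
    image: mongo-express
    ports:
      - "8081:8081"
    environment:
      ME_CONFIG_MONGODB_ADMINUSERNAME: username
      ME_CONFIG_MONGODB_ADMINPASSWORD: secret
      ME_CONFIG_MONGODB_SERVER: mongo
    networks:
      - mongo
    depends_on:
      - mongo

networks:
  mongo:
```

Both services list the same network under networks, which is what allows mongo-express to reach the database at the hostname mongo.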
- First, delete the mongo and mongo-express containers if they already exist:
docker rm -f mongo
docker rm -f mongo-express
- Remove the mongo network:
docker network rm mongo
- Use Docker Compose to create and start the containers:
cd docker-compose/
docker compose up
- Everything should be up and running now, verify on localhost that it works like before:
http://localhost:8081
- However, this process has been started in the foreground.
- If you close it, localhost won't work anymore.
- To run Docker Compose in the background, use the -d flag:
cd docker-compose/
docker compose up -d
- To target a specific Docker Compose file and build from it, use the -f flag:
docker compose -f docker-compose/docker-compose.yml up -d
- List all containers started by Docker compose:
docker compose -f docker-compose/docker-compose.yml ps
- List running Docker compose projects:
docker compose -f docker-compose/docker-compose.yml ls
- Stop everything that has been started by the Docker Compose file:
docker compose -f docker-compose/docker-compose.yml stop
- Start everything again:
docker compose -f docker-compose/docker-compose.yml start
- To delete everything:
docker compose -f docker-compose/docker-compose.yml down
- NOTE: this deletes the containers and networks; since no volume is defined yet, all database data stored so far is lost as well (named volumes are only removed if you also pass the -v flag).
- To bring everything back up again, use the standard command:
docker compose -f docker-compose/docker-compose.yml up -d
- View logs of a specific container:
docker compose -f docker-compose/docker-compose.yml logs mongo
- Or view logs and follow them in real time:
docker compose -f docker-compose/docker-compose.yml logs mongo -f
- Volumes are used to persist data.
- In docker-compose.yml, add a top-level volumes section containing a data volume with an empty (default) configuration, and mount it in the mongo service.
- Apply these changes:
docker compose -f docker-compose/docker-compose.yml up -d
- Create a new database via localhost:
http://localhost:8081
- List all volumes:
docker volume ls
- There should be a docker-compose_data volume, which was created by Docker Compose.
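The volume definition described above might look like this. This is a sketch: data is an assumed volume name, and /data/db is the official mongo image's default data directory.

```yaml
services:
  mongo:
    image: mongo
    volumes:
      - data:/data/db

volumes:
  data:
```

Leaving data: empty uses Docker's default local volume driver; Compose prefixes the volume name with the project name, which is why it shows up as docker-compose_data.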
- Full documentation for Docker Compose:
- The command docker scout cves scans your images for vulnerabilities.
- When building software, you want to make sure the containers you build are free from known security vulnerabilities.
- There are tools to detect any vulnerabilities your images might have.
- View all commands of scout cves:
docker scout cves --help
- NOTE: in older versions, the command used to be docker scan.
- Scan a specific image:
docker scout cves node
- Full documentation for Trivy:
- Detects 2 types of security issues:
- Vulnerabilities for your images.
- Misconfigurations.
- Scans several different artifact types:
- Container images
- Filesystem
- Git repositories (remote)
- Virtual Machine (VM) image
- Kubernetes
- AWS
- Can be run in 2 different modes:
- Standalone
- Client/Server
- To install Trivy, go to https://aquasecurity.github.io/trivy/v0.38/getting-started/installation/.
- Run Trivy using Docker:
docker run -v /var/run/docker.sock:/var/run/docker.sock -v $HOME/Library/Caches:/root/.cache/ aquasec/trivy:0.38.3 image node
- Note: this command was used from the official Trivy site during installation.
- Full documentation for Distroless Images:
- When building applications and containerizing them, best practice is to use distroless images in production.
- Distroless images contain only your application and its runtime dependencies.
- They do not contain package managers, shells or any other programs you would expect to find in a standard Linux distribution.
- This is a best practice employed by Google and other tech giants that have run containers in production for many years.
- Improves the signal-to-noise of scanners (e.g. CVE) and reduces the burden of establishing provenance to just what you need.
- If you use a distroless image, then you only have your application as well as any dependency it needs in order to run.
- Distroless images are very small.
- The smallest distroless image is around 650 kB.
- Scan the distroless image:
docker scout cves gcr.io/distroless/static
- There should be no vulnerabilities found.
- Full documentation for best practices and common mistakes regarding Docker: