- Overview
- Key Components and Their Roles
- First Basics 😄
- Docker
- Docker Images
- Docker Compose
- Docker Volumes
- The Network
- Project Tips
- Resources
The Inception Project is a sophisticated web infrastructure, all running on the same Docker network, orchestrated using Docker and Docker Compose. NGINX manages incoming web traffic, serving static files directly and forwarding dynamic content requests to PHP-FPM, which processes PHP code from WordPress. WordPress uses Redis for caching frequently accessed data, enhancing performance by reducing database queries to MariaDB, which handles all content data storage and management. After PHP-FPM processes the request and retrieves data from Redis and MariaDB, the content is returned to NGINX for delivery to the user. Additionally, NGINX serves a static website for direct content delivery. Adminer provides database management for MariaDB, and Portainer oversees and monitors the Docker containers running these services. Docker volumes ensure persistent storage and efficient data management, all within a unified Docker network that facilitates seamless communication and operation across the entire system.
The components of the Inception project are interconnected and managed through Docker Compose, which facilitates the setup and orchestration of the multi-container application. Here’s a breakdown of how the various services are configured to work together:
Component | Role |
---|---|
NGINX | Acts as the web server and reverse proxy, handling incoming web requests efficiently. Serves static content and forwards dynamic content requests to PHP-FPM. Optimizes web performance and ensures secure connections through SSL/TLS configuration. |
WordPress with PHP-FPM | Forms the core of the dynamic content management system (CMS). PHP-FPM processes PHP scripts, enabling WordPress to generate dynamic web pages based on user interactions, templates, and plugins. Essential for serving personalized content to users. |
MariaDB | Serves as the relational database management system (RDBMS) for WordPress. Stores all structured data generated by WordPress, including posts, pages, comments, and settings. Ensures data persistence and integrity. |
Redis | Acts as an in-memory data structure store, used as a database, cache, and message broker. Enhances WordPress performance by caching frequently accessed data, reducing load times, and improving the overall user experience. |
Static Website | Represents a simple website hosted alongside the WordPress site. Demonstrates the capability to serve static content efficiently, showcasing the versatility of the NGINX server in handling different types of web content. |
Portainer | Provides a graphical interface for managing Docker containers, images, networks, and volumes. Simplifies the administration of the Docker environment, making it easier to monitor and manage the infrastructure components. |
Adminer | Introduces a web interface for database management, supporting operations such as viewing and editing databases, tables, and records. Complements MariaDB by providing an accessible way to interact with the database directly from a web browser. |
- A custom Docker network named `inception` is created to enable seamless communication between all containers. This network ensures that services can find and communicate with each other using their container names as hostnames.
- Each component is configured to communicate through the `inception` network, ensuring a cohesive and functional web infrastructure. Docker Compose handles the orchestration, including the creation of networks, volumes, and service dependencies, simplifying the setup and management of the project.
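As a sketch, the `inception` network setup can be declared in `docker-compose.yml` roughly like this (the service names and build paths are illustrative, not the project's exact configuration):

```yaml
# Illustrative docker-compose.yml fragment: all services join the
# custom "inception" network and reach each other by service name.
services:
  nginx:
    build: ./requirements/nginx        # hypothetical build context
    networks:
      - inception
  wordpress:
    build: ./requirements/wordpress    # hypothetical build context
    networks:
      - inception

networks:
  inception:
    driver: bridge                     # default driver; Docker's embedded DNS
                                       # resolves service names on this network
```

With a layout like this, NGINX can forward PHP requests to the WordPress container simply by addressing it as `wordpress`, because Docker resolves service names on the shared network.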
Virtualization involves creating virtual versions or representations of computing resources—such as servers, storage devices, operating systems (OS), or networks—that are abstracted from the underlying physical hardware. This abstraction allows for greater flexibility, scalability, and agility in managing and deploying resources. Essentially, it lets you run multiple virtual computers on a single physical machine, effectively giving you several software-defined computers from one set of physical hardware.
A hypervisor is software that enables the creation and management of virtual computing environments. It acts as a lightweight layer, either software or firmware, that sits between the physical hardware and the virtualized environments. This layer allows multiple operating systems to run concurrently on a single physical machine by abstracting and partitioning the underlying hardware resources—such as CPUs, memory, storage, and networking—and allocating them to the virtual environments. Essentially, the hypervisor serves as the middleman, channeling resources from your physical infrastructure to various virtual instances. Hypervisors are crucial to virtualization technology, enabling efficient utilization and management of computing resources.
Virtual machines (VMs) are simulated computing environments that run on physical hardware. They enable multiple operating systems and applications to operate independently on a single physical server. Each VM functions as a separate computer, with its own operating system, resources (such as CPU, memory, and storage), and applications. VMs allow for efficient use of hardware resources, simplify system management, and provide increased flexibility in deployment and scalability.
- Containers are a form of virtualization that allows you to run applications in isolated environments. They package an application and its dependencies into a single unit that can run consistently across various computing environments. This ensures that the application will work the same way regardless of where it's deployed.
- Containers leverage several key Linux features to provide isolation and resource management. The main features are:
- Namespaces: These provide process isolation by creating separate environments for containers. Each container gets its own namespace for different aspects:
- PID Namespace: Isolates process IDs, so processes in one container cannot see or interact with processes in another.
- Network Namespace: Provides each container with its own network stack, including IP addresses and network interfaces.
- Mount Namespace: Isolates the file system, so containers have their own views of the filesystem, independent of the host.
- UTS Namespace: Isolates hostname and domain name settings, allowing containers to have their own hostname.
- IPC Namespace: Isolates inter-process communication resources, such as shared memory segments.
- User Namespace: Allows containers to have their own user and group IDs, enhancing security by mapping container users to different IDs on the host.
- Control Groups (cgroups): Manage and limit the resources allocated to containers. They provide mechanisms to:
- Limit Resource Usage: Set limits on CPU, memory, disk I/O, and network bandwidth for containers.
- Monitor Resource Usage: Track and report resource usage to manage and optimize performance.
- Filesystem Layers: Containers use layered filesystems to build images. Each layer represents a set of changes (e.g., added files or modified configurations), and these layers are stacked to create a complete image. This allows for efficient storage and sharing of common layers across different containers.
- Container Runtime: Manages the lifecycle of containers, including starting, stopping, and monitoring. Examples include Docker and containerd. The runtime interacts with namespaces, cgroups, and filesystems to provide container functionality.
These features work together to provide the isolation, resource management, and efficiency that containers are known for.
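Docker exposes these cgroup controls directly in its configuration. As a hedged sketch, a Compose service can cap its resources like this (the service name, image, and limit values are illustrative):

```yaml
# Illustrative compose fragment: cgroup-backed resource limits
services:
  wordpress:
    image: wordpress        # illustrative image
    cpus: "0.50"            # cgroup cpu controller: at most half a core
    mem_limit: 256m         # cgroup memory controller: 256 MiB cap
    pids_limit: 100         # cgroup pids controller: at most 100 processes
```

Under the hood, Docker translates these keys into limits on the container's cgroup, so the kernel enforces them regardless of what the application tries to do.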
Aspect | Virtual Machines (VMs) | Containers |
---|---|---|
OS | Full OS (includes application and dependencies) | Shares host OS kernel (includes only application and dependencies) |
Isolation | Strong isolation, each VM is a separate environment | Moderate isolation, containers share the same OS kernel |
Resource Usage | More resource-intensive, needs separate OS for each VM | Lightweight, uses fewer resources |
Boot Time | Longer boot time due to full OS initialization | Fast startup, often in seconds |
Use Cases | Suitable for running different OS or strong isolation needs | Ideal for microservices, CI/CD, and scalable applications |
- Docker provides a comprehensive platform and suite of tools that have transformed the way applications are developed, shipped, and run. Built on the concept of containerization, Docker encapsulates applications and their dependencies into self-sufficient containers. This approach ensures that applications run consistently across different environments, from development to production. Docker simplifies container creation, management, and orchestration, making it accessible to developers and operations teams.
- Before Docker developed its own container runtime, libcontainer, which now powers Docker containers, it used LXC to provide an easier way to create, deploy, and run applications using containers. By offering a lighter, faster, and more agile way of handling applications, Docker set the standard for modern application deployment and management.
Aspect | LXC | Docker |
---|---|---|
Container Format | Varies widely; no standard format | Standardized format with Docker images and Dockerfiles |
Ease of Use | More complex setup requiring detailed OS configuration knowledge | Simplified setup with pre-built packages and extensive documentation. |
Portability | Challenging to ensure consistency across environments | "Build once, run anywhere" consistency |
Ecosystem | Fragmented tools; different solutions for building, sharing, and orchestrating | Comprehensive ecosystem: Docker Hub, Docker Compose, Docker Swarm, Kubernetes |
Layered Filesystem | Often included redundant data; updates could be cumbersome | Layered filesystem for efficient storage and faster builds |
Isolation and Security | Varies; manual configuration required | Improved isolation via integrated namespaces and cgroups, in a cohesive platform that abstracts their complexity and is secure by default |
Use Cases | Efficient access to hardware resources, Virtual Desktop Infrastructure (VDI) | Streamlined deployment, Microservices architecture, CI/CD pipelines, Extensive image repository and configuration management |
Component | Description |
---|---|
Docker Desktop | Known for its user-friendly interface, Docker Desktop simplifies tasks in building, running, and managing containers. |
Docker Engine | The core runtime component of Docker, shipped with Docker Desktop, provides a lightweight and secure environment for running containerized applications. |
Docker Scout | Delivers near real-time actionable insights, making it simple to secure and manage the software supply chain end-to-end. |
Docker Hub | The world’s largest and most widely used image repository, Docker Hub serves as the go-to container registry for developers to share and manage containerized applications securely. |
Docker Build Cloud | A premium service that enhances the image-building process in enterprise environments. |
- Runtime: The runtime is the component that actually runs the containers. It manages the execution and isolation of containers, ensuring they run as lightweight, standalone units.
- Daemon: The Docker daemon (or engine) is the core service that runs in the background and manages all Docker objects, including containers, images, networks, and volumes. It listens for Docker API requests and performs actions to manage your containers.
- Orchestration: Tools like Docker Swarm or Kubernetes fall under this category. They manage and coordinate the deployment, scaling, and operations of containerized applications across multiple hosts, ensuring high availability and efficient resource utilization.
- CLI: The Docker CLI is the command-line tool that allows users to interact with the Docker daemon. It provides commands to build, run, and manage containers, images, networks, and volumes.
- Builder: The builder is responsible for creating Docker images from Dockerfiles. It packages applications and their dependencies into a portable image format that can be shared and deployed across different environments.
- Registry: A registry is a storage and distribution system for Docker images. Docker Hub is the most well-known registry, but private registries can also be set up. Registries store Docker images and allow them to be retrieved by other users or systems.
Simply put, Docker images encapsulate everything needed to run an application in a container. They are built from Dockerfiles, stored in registries, and versioned for easy management and distribution. The immutability of images ensures consistency across different environments, making them a crucial part of the Docker containerization ecosystem.
Docker images are the basis of containers. They are read-only templates with instructions for creating a Docker container. An image is a lightweight, standalone, and executable package that includes everything needed to run a piece of software, such as code, runtime, libraries, environment variables, and configuration files. An image typically contains a union of layered filesystems stacked on top of each other.
- Base Image: The starting point for creating a Docker image, typically an operating system or a minimal image with essential packages.
- Layers: Images are built in layers. Each layer represents a set of changes (like added files or configurations) on top of the previous layer. Layers are cached, making subsequent builds faster.
- Dockerfile: A text file with a series of instructions used to build a Docker image. It specifies the base image, adds files, sets environment variables, and defines commands to run.
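These three concepts come together in a Dockerfile. A minimal, hypothetical example (the image tag, paths, and packages are illustrative, not the project's actual files):

```dockerfile
# Base image: the starting point for the build
FROM debian:bullseye

# Each instruction below produces a new cached layer
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*

# Copy site content into the image (illustrative path)
COPY ./site /var/www/html

# Environment variable baked into the image
ENV NGINX_PORT=80

# Command the container runs at startup
CMD ["nginx", "-g", "daemon off;"]
```

Because each instruction is a layer, rebuilding after changing only `./site` reuses the cached `RUN` layer, which is why builds after the first one are much faster.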
Docker Compose is a tool for defining and running multi-container Docker applications. It uses a simple YAML file, `docker-compose.yml`, to configure application services, networks, and volumes, enabling the orchestration of complex applications with a single command.

- `docker-compose up`: Starts all services defined in the `docker-compose.yml` file. It creates containers, networks, and volumes as specified.
- `docker-compose down`: Stops and removes all containers, networks, and volumes created by `docker-compose up`.
- `docker-compose build`: Builds or rebuilds the services defined in the Compose file, using the Dockerfile specified for each service.
- `docker-compose logs`: Displays logs from the running services, helping with debugging and monitoring.
- Volumes are Docker’s method for persisting data generated by and used by Docker containers. Volumes allow data to be shared between containers and retained across container restarts and lifecycles, providing a reliable way to manage persistent storage.
- You can manage volumes using Docker CLI commands. Here are some examples:
  - Create a volume: `docker volume create my-vol`
  - List volumes: `docker volume ls`
  - Inspect a volume: `docker volume inspect my-vol`
  - Remove a volume: `docker volume rm my-vol`
- There are three types of volumes: host, anonymous, and named:
- A host volume lives on the Docker host's filesystem and can be accessed from within the container.
- A named volume is managed by Docker: Docker decides where on disk the volume is stored, and you refer to it by the name you give it.
- An anonymous volume is similar to a named volume, but since it has no name, it can be difficult to refer to the same volume over time. Docker handles where the files are stored.
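The three flavors can be sketched in a Compose file (the service name, image, and paths are illustrative):

```yaml
# Illustrative compose fragment showing the three volume types
services:
  mariadb:
    image: mariadb                       # illustrative image
    volumes:
      - db_data:/var/lib/mysql           # named volume (Docker-managed)
      - ./conf/my.cnf:/etc/mysql/my.cnf  # host (bind) mount from the project dir
      - /var/lib/mysql-tmp               # anonymous volume (no name to refer to)

volumes:
  db_data:                               # declares the named volume
```

The named volume survives `docker-compose down && docker-compose up`, which is exactly the behavior the project needs for database persistence.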
- In Docker, managing volumes and other storage options involves various drivers and mount types that determine how data is handled and where it is stored.
Driver | Description | Usage |
---|---|---|
Default (local) | Stores data on the host filesystem. It is the default driver for volumes. | docker volume create my_volume |
Custom Drivers | Third-party or custom drivers for advanced features like cloud storage or networked file systems. | docker volume create --driver custom_driver my_volume |
Mount Type | Description | Usage |
---|---|---|
Volume Mounts | Mounts a Docker volume into a container, managed by Docker. | docker run -v my_volume:/path/in/container my_image |
Bind Mounts | Mounts a specific directory or file from the host filesystem into a container, bypassing Docker’s storage system and granting direct access to host files. | docker run -v /host/path:/container/path my_image |
Tmpfs Mounts | Creates a temporary filesystem in memory for a container. Data is lost when the container stops. | docker run --tmpfs /container/path:rw,size=100m my_image |
Option | Description | Usage |
---|---|---|
Read-Only | Mounts the volume or bind mount as read-only. | docker run -v my_volume:/path/in/container:ro my_image |
Read-Write | Allows both read and write access to the mounted volume or bind mount (default mode). | docker run -v my_volume:/path/in/container my_image |
Consistency | Specifies synchronization requirements (for certain drivers), such as `consistent`, `cached`, or `delegated`. | docker run -v /host/path:/container/path:consistent my_image |
- Docker's networking capabilities allow containers to communicate with each other and with external systems. Docker supports various network drivers and configurations to facilitate container connectivity.
- In Docker, the networking subsystem is pluggable, using drivers. Several drivers exist by default and provide core networking functionality:
Network Driver | Description |
---|---|
Bridge | The default network driver. If no driver is specified, this type of network is created. Bridge networks are used when containers on the same host need to communicate with each other. |
Host | Removes network isolation between the container and the Docker host, allowing the container to use the host's networking directly. |
Overlay | Connects multiple Docker daemons and enables communication across nodes for Swarm services and containers. This eliminates the need for OS-level routing. |
IPvlan | Provides control over both IPv4 and IPv6 addressing. The VLAN driver extends this by offering layer 2 VLAN tagging and L3 routing for integration with underlay networks. |
Macvlan | Allows assignment of a MAC address to a container, making it appear as a physical device on the network. Useful for legacy applications that need direct network access. |
None | Completely isolates a container from the host and other containers. This driver is not available for Swarm services. |
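For example, a user-defined bridge network with an explicit address range can be declared in Compose (the network name and subnet are illustrative):

```yaml
# Illustrative compose fragment: user-defined bridge network
networks:
  inception:
    driver: bridge              # explicit driver choice (bridge is the default)
    ipam:
      config:
        - subnet: 172.28.0.0/16 # illustrative address range for containers
```

Services attached to this network get addresses from the declared subnet, and unlike the default bridge, a user-defined bridge gives containers name-based DNS resolution out of the box.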
Docker Desktop for Mac allows you to easily run Docker and Kubernetes on your macOS system. It functions within a lightweight Linux VM, which means that while Docker commands will work as expected, only Linux-based Docker containers are supported. To install, search for "install Docker Desktop," download the installer, and follow the on-screen instructions. You can choose between stable and edge channels for feature updates. After installation, launch Docker Desktop from the Launchpad and run Docker commands in the terminal as usual. Note that the Docker client runs natively on macOS, but the Docker daemon operates within the Linux VM.
For 42 students: you can run the script `./init_docker.sh` from the repo https://github.com/alexandregv/42toolbox.git to use Docker in your goinfre directory!
To install Docker on a Linux system (Debian/Ubuntu), follow these steps:
1. Update your existing list of packages:
sudo apt update
2. Install a few prerequisite packages which let apt use packages over HTTPS:
sudo apt install apt-transport-https ca-certificates curl software-properties-common
3. Add the GPG key for the official Docker repository to your system:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
4. Add the Docker repository to APT sources:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
5. Update the package database with Docker packages from the newly added repo:
sudo apt update
6. Install Docker (the package from Docker's repository is `docker-ce`, not `docker`):
sudo apt install docker-ce
Adding your domain name to the hosts file allows your local machine to resolve the domain to the specified IP address, which is useful for development and testing purposes. This ensures that when you enter the domain in your browser, it directs to your local server instead of the live site.
To add the domain name of the WordPress website to the hosts file, follow these steps:
1. Open the hosts file in a text editor with sudo privileges:
sudo nano /etc/hosts
2. Add a new entry for your domain:
127.0.0.1 login.42.fr
3. Save and close the file :D
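The manual edit above can also be scripted. A small sketch (the helper name `add_host_entry` is hypothetical) that appends the entry only if it is not already present; run it with sudo when targeting the real /etc/hosts:

```shell
# add_host_entry FILE IP NAME — append "IP<TAB>NAME" to a hosts file,
# skipping the append if an identical entry already exists.
# (Illustrative helper; use sudo when FILE is /etc/hosts.)
add_host_entry() {
    file="$1"; ip="$2"; name="$3"
    if ! grep -qE "^${ip}[[:space:]]+${name}([[:space:]]|\$)" "$file" 2>/dev/null; then
        printf '%s\t%s\n' "$ip" "$name" >> "$file"
    fi
}

# Example: add_host_entry /etc/hosts 127.0.0.1 login.42.fr
```

Because the function checks before appending, running it twice does not duplicate the entry, which keeps the hosts file clean across repeated project setups.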