
Dockerizing VENFT APP Server

DeerajChakravarthy edited this page Apr 8, 2022 · 9 revisions

Introduction

Docker is an OS-level virtualization platform that lets you package software as isolated units called containers. Containers are created from images that include everything needed to run the packaged workload, such as executable binaries and dependency libraries. Images are defined in Dockerfiles, which resemble sequential scripts that are executed to assemble an image.

Docker Architecture


**Dockerfiles** can include several kinds of instructions, such as **RUN**, to execute a command in the container’s file system, and **COPY**, to add files from your host.

```dockerfile
FROM ubuntu:latest
COPY /source /destination
RUN touch /example-demo
```

Three instructions are used here: FROM, which defines the base image from which yours will inherit; COPY, which lets you add files from your host to paths inside the container; and RUN, which is a pass-through for executing commands inside the container’s file system.

Steps

  • docker build -t venftappserver -f venftapp-server\Dockerfile .
  • -t – tags the image; venftappserver is the tag in this case
  • -f – specifies the Dockerfile location
  • Run the following command after the image has been built:
  • docker run -d -p 8080:80 --name DockerizedVENFTAppServer venftappserver
  • -p – maps a port on the host machine to the port the app is listening on in the container. Host port 8080 is exposed and will be used to access the application; it is mapped to port 80 within the container.
  • --name – assigns a name to the container, DockerizedVENFTAppServer in this case.
  • -d – runs the container in the background and prints the container ID
  • Open the application at localhost:8080/swagger/index.html
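Taken together, the steps above form a short session (the Windows-style Dockerfile path is copied from the build command; use forward slashes on Linux/macOS):

```shell
# Build the image from the Dockerfile in the venftapp-server folder
docker build -t venftappserver -f venftapp-server\Dockerfile .

# Start a detached container, mapping host port 8080 to container port 80
docker run -d -p 8080:80 --name DockerizedVENFTAppServer venftappserver

# Verify it is running, then browse to http://localhost:8080/swagger/index.html
docker ps --filter "name=DockerizedVENFTAppServer"
```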

Overriding Appsettings.json

To override appsettings.json in the container with a copy from the host, bind mount the file by executing the following command:

docker run -d --name=DockerizedVENFTAppServerAppSettings -p 8080:80 -v /d/RSqr/CORUZANT/Temp/appsettings.json:/app/appsettings.json venftappserver

-v – bind mounts a volume, in the form <<file path on the host machine>>:<<file path on the container>>
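You can confirm the mount took effect by reading the file back from inside the running container (container name as in the command above):

```shell
# Print the container's copy; it should match the host file
docker exec DockerizedVENFTAppServerAppSettings cat /app/appsettings.json
```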

Spinning up a docker instance after downloading docker image from IPFS

  1. Download the docker image from IPFS: https://ipfs.infura.io/ipfs/QmVbAZqz8xSrfcPaeJx7cJx3BW8p5jZ4784GF7NhhLLpxy

  2. Step #1 will download a file, download.tar

  3. Use the following command to load the image in your local docker host:

    docker load -i download.tar

  4. Now type the following command to see if the image was created.

    `docker images`

  5. The image should be listed with repository name 'venftappserver' and tag 'latest'

  6. Run the following command to create the docker instance:

    docker run -d -p 8090:80 --name IPFSVENFTAppServer venftappserver

  7. Access the URL http://localhost:8090/swagger/index.html in a web browser of your choice
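The seven steps above condense to the following session (curl is an assumption; any HTTP downloader works):

```shell
# 1-2. Download the image archive from the IPFS gateway
curl -L -o download.tar https://ipfs.infura.io/ipfs/QmVbAZqz8xSrfcPaeJx7cJx3BW8p5jZ4784GF7NhhLLpxy

# 3-5. Load it into the local Docker host and confirm it is listed
docker load -i download.tar
docker images venftappserver

# 6-7. Run it, then browse to http://localhost:8090/swagger/index.html
docker run -d -p 8090:80 --name IPFSVENFTAppServer venftappserver
```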

Best Practices

1. Scan for Security Vulnerabilities

Do: Perform active scans of your Docker images to identify key vulnerabilities. Several tools are available, including Docker Scan, which is built into the Docker CLI.

Why: Scans assess the operating system packages installed in the image and match them against known lists of Common Vulnerabilities and Exposures (CVEs). Suggested remediation steps are provided for each detected problem.

Benefit: Avoids issues slipping into production.

DEVOPS Note: Add a scan to the image build stage of your CI pipeline to help protect against developers unwittingly adding risky packages to the image.
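With the image from the Steps section, a scan is a single command (docker scan is bundled with recent Docker Desktop releases; older CLIs may need the scan plugin installed separately):

```shell
# Match the image's OS packages against known CVE lists
docker scan venftappserver
```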

2. Remove Unneeded Dependencies / Keep Images Small

Do: Keep Docker images as minimal as possible. Don’t install any packages or libraries that your application doesn’t actually use. As you’ll rarely interact with running containers yourself, there’s no point in adding common CLI utilities on the off chance you might want them later.

Why: Streamlining your builds to include just the bare essentials makes for smaller image sizes, faster builds, and a reduced attack surface. Less network bandwidth will be used when transferring images to registries and hosting providers.

Benefit: Maintaining this practice will help your Dockerfile stay focused on containerizing your application instead of an entire OS environment.
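A multi-stage build is the usual way to follow this practice: build tooling lives in a throwaway stage and only the published output reaches the final image. The sketch below assumes an ASP.NET Core server (plausible for an app serving Swagger, but unconfirmed here; the output name VenftAppServer.dll is hypothetical):

```dockerfile
# Build stage: SDK image with compilers and build tooling
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Final stage: runtime-only image, no SDK, no build artifacts
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "VenftAppServer.dll"]
```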

3. Secure Injected Credentials

Do: Configuration keys and other secret, sensitive data must be supplied to individual containers, not baked into the images they’re built from. This is one of the most common and dangerous Dockerfile issues: it can be tempting to copy config files as part of your build, but this is an anti-pattern that should be avoided.

Why: Anything copied as part of your Dockerfile is baked into your image and accessible to anyone who can access the image. If a file containing a database password is added, those secrets are exposed to every user with access to your image registry.

Benefit/Solution: Using environment variables, config files mounted into a volume, or Docker’s dedicated secrets mechanism to inject data when containers start avoids accidental information exposure and keeps your images reusable across environments.
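Two of the injection mechanisms named above, applied to the image from the Steps section (the variable name, connection string, and host path are hypothetical examples):

```shell
# Inject a secret as an environment variable at start-up
docker run -d -p 8080:80 \
  -e ConnectionStrings__Default="Server=db;Password=secret" \
  venftappserver

# Or mount the config file from the host instead of COPYing it into the image
docker run -d -p 8080:80 \
  -v /host/secrets/appsettings.json:/app/appsettings.json \
  venftappserver
```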

4. Run as a Non-Root User

Do: Use the USER instruction to direct Docker to run a container’s command as a specified non-root user. It accepts the name or ID of a user and group. All subsequent RUN, CMD, and ENTRYPOINT instructions in your Dockerfile will run as that account.

Why: Docker containers usually default to running their processes as the root user. Using root to run your processes means that a successful exploit of a web server inside your container could let an attacker take control of the container or even your host machine—the root user in the container is the same as the root on your host.

Benefit: For even greater isolation, you can run the Docker daemon as a non-root user. This uses namespace remapping to completely avoid the use of the root on the host. Even if a process breaks out of your container, it won’t be able to fully compromise your machine. The rootless mode requires a special configuration and does not work with all Docker features.
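In a Dockerfile, dropping root looks like this (the account name app is an arbitrary example):

```dockerfile
FROM ubuntu:latest
# Create a dedicated unprivileged user and group
RUN groupadd --system app && useradd --system --gid app app
# All subsequent RUN, CMD, and ENTRYPOINT instructions run as this account
USER app
```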

5. Track Dockerfile Versions

Do: Version control Docker files.

Why: Dockerfiles change over time, so you should commit them to version control. Some organizations keep Dockerfiles in the same repository as their code, facilitating one git pull followed by a docker build to build the image, whereas others choose to use a separate repository. In the latter case, you’d need to pull the code repository separately before building your image.

Benefit: Using version control lets you track, revert, and review the changes you make to your Dockerfile. It will be a useful starting point to debug issues if you build a new image and then find that it fails when deployed.

6. Create Stateless, Reproducible Containers

Do: Docker containers should be ephemeral and completely stateless.

Why: Images need to define reproducible builds: the exact same image should be built each time you run docker build with a single Docker file. This is aided by specifying exact versions of the packages you install, as the latest, release, or even v1 could change significantly in the future.

Another trait of reproducible builds is the absence of side effects. Builds are meant to create containers, not to update data in your database, push artifacts into your CI system, or send you emails about progress. Those are all jobs to be performed in a separate stage in your pipeline.

Benefit: Heeding these practices ensures that the image you build with docker build a week from now will be the same as the one you build today. It also reduces what you need to archive: Docker registries can quickly become large, but properly built images don’t necessarily need to be backed up. If you do lose your registry, you’ll be able to quickly rebuild replicas of your images from the Dockerfiles in your source repositories.
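Pinning exact versions, as recommended above, might look like this (the apache2 version string is an arbitrary example and will vary by distribution release):

```dockerfile
# Pin the base image to a release, not :latest
FROM ubuntu:22.04
# Pin each installed package to an exact version
RUN apt-get update && \
    apt-get install -y --no-install-recommends apache2=2.4.52-1ubuntu4 && \
    rm -rf /var/lib/apt/lists/*
```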

7. Handle Multiline Arguments

Do: Combine as many related commands as possible into a single RUN instruction.

Why: Each RUN instruction adds a layer to the image, so grouping commands produces fewer layers and helps facilitate layer caching.

Benefit: Fewer, well-organized layers keep images leaner and rebuilds faster.

Note: It’s inevitable that some Dockerfile instructions will become long and unwieldy; this is especially true of RUN instructions that execute commands within the container. The drawback of multiline arguments is very long lines in your Dockerfile. To mitigate this, combine multiple lines with backslashes, which makes your Dockerfile easier to read so newcomers can quickly scan through the instructions. Docker suggests that you sort lines alphabetically; while this won’t be possible in all command sequences, it works well for lists of packages and other downloaded files.

Ex:

```dockerfile
RUN apt-get update && \
    apt-get install -y apache2 && \
    service apache2 restart
```

8. Know the Difference Between CMD and ENTRYPOINT

CMD and ENTRYPOINT are two instructions used to define the process that’s run in your container. ENTRYPOINT sets the process that’s executed when the container first starts. This defaults to a shell, /bin/sh. CMD provides the default arguments for the ENTRYPOINT process.

The following pair of instructions results in date +%A running when the container is created:

```dockerfile
ENTRYPOINT ["date"]
CMD ["+%A"]
```

Setting a custom ENTRYPOINT lets image users quickly access your CLI binaries without having to specify their full path. When you pass arguments to docker run, Docker overrides the CMD but reuses the ENTRYPOINT:

```shell
docker run my-image:latest +%B
```

The above command starts the container and runs date +%B.
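The ENTRYPOINT itself can also be replaced explicitly at run time with the --entrypoint flag (my-image as in the example above):

```shell
# Swap the ENTRYPOINT for a shell; setting --entrypoint also clears the
# image's default CMD arguments
docker run --entrypoint /bin/sh my-image:latest
```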

9. Use COPY Instead of ADD

Dockerfiles support two similar but subtly different instructions. ADD and COPY both let you add existing files to your image; whereas COPY only works with local files on your host, ADD also accepts remote URLs and automatically extracts tar archives. A simple ADD archive.tar has very different results from COPY archive.tar because the version using ADD will copy the contents of the archive, not the archive file itself.

Because ADD possesses this extra magic, it’s advisable to use COPY as your go-to when copying content from your file system. ADD creates unnecessary ambiguity, especially when referencing archive files, which means it should be reserved for use when it’s really needed. This helps you communicate your intentions to other contributors who might work with your Dockerfile.
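The contrast is easiest to see side by side (archive.tar and the destination path are illustrative):

```dockerfile
# COPY places the archive file itself at /opt/archive.tar
COPY archive.tar /opt/
# ADD auto-extracts a local tar archive into /opt/ instead
ADD archive.tar /opt/
```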

10. Use CI/CD for testing and deployment

Do: Use Docker Hub or another CI/CD pipeline to automatically build and tag a Docker image and test it when a pull request is created.

Why: To ensure the image has been tested and signed off by, for instance, development, quality, and security teams before an image is deployed into production.

Sign images: Take this even further by requiring your development, testing, and security teams to sign images before they are deployed into production.
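A minimal pipeline stage following this practice might run (the registry host and SHA-based tag scheme are assumptions, not part of any existing setup):

```shell
# Build and tag with the commit SHA for traceability
docker build -t registry.example.com/venftappserver:$GIT_COMMIT .

# Fail the pipeline if the scan reports vulnerabilities
docker scan registry.example.com/venftappserver:$GIT_COMMIT

# Push only after tests, scans, and sign-off
docker push registry.example.com/venftappserver:$GIT_COMMIT
```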

References

https://docs.docker.com/develop/dev-best-practices/

https://www.qovery.com/blog/best-practices-and-tips-for-writing-a-dockerfile

https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
