A container is an isolated process running on a server with Docker installed. It makes use of kernel namespaces and cgroups, which have long been features in Linux. Docker just makes it easier to access and use these features.
To put it in simpler terms, each application you run is contained within its own container and is independent of other containers.
The basic syntax to create a container running the nginx application is:
docker run --name my-nginx nginx
Here, --name is the flag for naming the container, my-nginx is the desired name of the container, and nginx is the name of the image used to run the container.
To view the list of running containers, use the command:
docker ps
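Beyond listing running containers, a few other everyday commands are worth knowing. A quick sketch, reusing the my-nginx container from the example above:

```shell
# list all containers, including stopped ones
docker ps -a

# follow a container's logs
docker logs -f my-nginx

# stop and remove the container
docker stop my-nginx
docker rm my-nginx
```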
To create a container, you need an image — a packaged template from which containers are started.
An image includes the file system of the container, so it must contain everything necessary to run an application, such as dependencies, configurations, and system files.
Imagine an image for a Node.js application: it would contain a base operating system file system, the Node.js runtime, and your project directory.
Images also include other configurations, such as environment variables and default commands to run when the image starts.
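You can inspect these baked-in settings on any image with docker image inspect. For example, to print the environment variables and default command of the nginx image:

```shell
# --format filters the JSON output down to the fields we care about
docker image inspect nginx --format '{{.Config.Env}} {{.Config.Cmd}}'
```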
To create an image, you use a Dockerfile. A Dockerfile is a text file that contains instructions to build an image with your application.
Some basic commands for working with images include:
# view the list of images on your machine
docker image ls
# pull an image from a registry
docker pull <image_name>:<tag>
# push an image to a registry
docker push <image_name>:<tag>
Images in Docker are tagged with versions. For example, an image like nginx can have tags such as latest, 1, 1.19, or 1.19.10, labeled according to their versions. If you do not specify a tag, Docker uses the latest tag by default.
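For instance, pulling a pinned version versus relying on the default tag:

```shell
# pull a specific version of nginx
docker pull nginx:1.19

# no tag given, so Docker resolves this to nginx:latest
docker pull nginx
```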
Since Docker containers are isolated, if you want them to communicate with each other over the network, you need to use networks. Networks in Docker act as an internet-like connection between containers.
By default, Docker provides several network drivers, such as bridge, host, overlay, macvlan, and none, along with third-party plugins. Three noteworthy types are:
Bridge is the default network used by containers when no other network type is specified. The bridge network allows containers on the same network to communicate with each other. However, if a container on the bridge network does not publish its ports to the host, you cannot reach those ports via the host's IP. For example, suppose you run an nginx application with:
docker run --name nginx nginx
The default nginx image listens on port 80. Suppose the IP of your host is 1.2.3.4. If you try to access http://1.2.3.4:80, you won't receive a response from nginx, because the container is on the bridge network and port 80 is not published to the Docker host. Now, let's modify the command to publish the port:
docker run --name nginx -p 8080:80 nginx
The -p 8080:80 syntax (short for --publish) publishes the nginx container's port 80 as port 8080 on the host, so now you can access http://1.2.3.4:8080 and receive a response from nginx.
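Besides the default bridge, you can create a user-defined bridge network, on which containers can reach each other by name. A minimal sketch — the network name my-net and container name web are made up for illustration:

```shell
# create a user-defined bridge network
docker network create my-net

# start nginx attached to it
docker run -d --name web --network my-net nginx

# another container on the same network reaches it by its name
docker run --rm --network my-net curlimages/curl http://web
```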
Host network lets the container share the network stack of the Docker host. This means that if a container runs nginx on port 80, it is also available on port 80 of the host:
docker run --name nginx --network host nginx
Port 80 of nginx is now exposed directly on the host, because with the host network the container shares the host's network. You can access http://1.2.3.4:80 and receive a response from nginx. (Note that the host network driver works this way on Linux hosts.)
Overlay network allows containers to communicate with each other across hosts when Docker runs in swarm mode. In swarm mode, multiple Docker hosts are connected and containers are distributed across them; the overlay network is what connects those containers.

A registry is a repository for Docker images. Apart from Docker's own registry, Docker Hub, there are many other registries provided by different vendors. You can even set up your own registry.
Using a registry allows you to store and distribute your applications to others or simply serve as a personal repository within your organization.
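Publishing an image to a registry generally looks like the sketch below; registry.example.com and my-app:1.0 are placeholders for your registry's address and your image:

```shell
# log in to the registry
docker login registry.example.com

# give the local image a name that points at the registry
docker tag my-app registry.example.com/my-app:1.0

# upload it
docker push registry.example.com/my-app:1.0
```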
A Dockerfile is a text file that contains instructions to build an image.
There are several instructions available in a Dockerfile to help you build an image; you can refer to them in the official Dockerfile reference.
For example, here is a Dockerfile to build a node.js app:
# build on the official Node.js 12 image
FROM node:12
ENV NODE_ENV=production
WORKDIR /app
# copy the dependency manifests first so this layer can be cached
COPY ["package.json", "package-lock.json*", "./"]
# install production dependencies only
RUN npm install --production
# copy the rest of the application source
COPY . .
# default command to run when a container starts
CMD [ "node", "server.js" ]
Then, in the directory containing the Dockerfile, build an image named my-node-app:
docker build -t my-node-app .
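After the build, you can start a container from the new image. The -p mapping below assumes server.js listens on port 3000, which the Dockerfile above does not show:

```shell
docker run -d --name my-node-app -p 3000:3000 my-node-app
```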
docker-compose is a tool that allows you to run an application consisting of multiple containers. The services are declared in a YAML file, and with one command you can start them all.
Imagine your node.js application requires a MySQL server to run. With docker-compose, you can start both the node.js server and the MySQL server at the same time.
For example:
version: "3.9"
services:
  node:
    image: my-node-app
    ports:
      - "3000:3000"
    depends_on:
      - mysql
  mysql:
    image: mysql
    environment:
      # the official mysql image requires a root password to start
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_USER=hoaitx
      - MYSQL_PASSWORD=password
Then, in the directory containing the docker-compose.yaml file, run the following command:
docker-compose up -d
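A few companion commands you will reach for right away, using the services defined above:

```shell
# list the services started by this compose file
docker-compose ps

# follow the logs of one service
docker-compose logs -f node

# stop and remove the containers and networks it created
docker-compose down
```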
Note: To use the docker-compose command, you need to install it separately because it is not included by default in the Docker installation. You can refer to the installation guide in Docker's documentation.
Don't worry if you don't fully understand the content of the yaml file. There are many commands available, but you only need to understand the meanings of some basic commands to start using it.
Similar to the Dockerfile, docker-compose has a vast range of options; you can refer to them in the Compose file reference. I will write an article on docker-compose and introduce some commonly used commands.
Docker Swarm is a mode that allows multiple Docker hosts to connect together to create a scalable and fault-tolerant environment.
In Docker Swarm, a host can act as a manager or a worker. The manager is responsible for managing worker nodes, while the workers are responsible for running containers.
Docker Swarm is useful for production deployments because it provides a fault-tolerant, scalable environment.
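A minimal sketch of forming a swarm and running a service on it — the join token and manager IP are printed by docker swarm init, so the values below are placeholders:

```shell
# on the machine that will become the manager
docker swarm init

# on each worker, paste the join command printed by the manager
docker swarm join --token <token> <manager-ip>:2377

# back on the manager, run nginx as a replicated service
docker service create --name web --replicas 3 -p 80:80 nginx
```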
These are some basic components of Docker and common terms you may encounter when using Docker. Don't worry if you don't understand everything yet, as I will cover each concept in more detail in the next article.
Hello, my name is Hoai - a developer who tells stories through writing ✍️ and creating products 🚀. With many years of programming experience, I have contributed to various products that bring value to users at my workplace as well as to myself. My hobbies include reading, writing, and researching... I created this blog with the mission of delivering quality articles to the readers of 2coffee.dev. Follow me through these channels: LinkedIn, Facebook, Instagram, Telegram.