- How to Install and Set Up Docker on Ubuntu 18.04 - hostinger.com
- Why Developers Should Learn Docker and Kubernetes in 2023 - dev.to
- Dockerize Your Nextjs App Like a Pro: Advanced Tips for Next-Level Optimization - levelup.gitconnected.com
- The Art of Crafting Dockerfile - medium.com
- How Docker Containers Work – Explained for Beginners - freecodecamp.org
- Docker Compose Cheatsheet - devhints.io
- Docker CLI cheatsheet - devhints.io
- Setting Memory And CPU Limits In Docker - baeldung
Docker is a platform designed to make it easier to develop, deploy, and run applications by using containers. Containers allow a developer to package up an application with all the parts it needs, such as libraries and other dependencies, and ship it all out as one package. This ensures that the application will run on any machine, regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.
Key components of Docker include:
Docker Daemon (dockerd): The Docker daemon runs on the host machine. It manages Docker objects like images, containers, networks, and volumes.
Docker Client (docker): The Docker client is the command-line tool that allows users to interact with the Docker daemon. It sends commands to the daemon, which then carries them out.
Docker Images: Images are the building blocks of containers. They are read-only templates containing the application code, libraries, dependencies, and other settings needed to run the application.
Docker Containers: Containers are instances of Docker images. They are lightweight, portable, and runnable environments that encapsulate an application and its dependencies.
Docker Registry: A Docker registry is a repository for Docker images. Docker Hub is a public registry where users can share and access pre-built images. Private registries can also be used for more secure image storage.
Docker Compose: Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to specify a multi-container application setup in a single file and then spin up the entire environment with a single command.
Docker Swarm (Optional): Docker Swarm is a native clustering and orchestration solution for Docker. It allows you to create and manage a swarm of Docker nodes, turning them into a single, virtual Docker host.
Common problems Docker helps avoid:
- A Next.js application works perfectly on localhost but not on the production server (e.g. after a reload it redirects to the '/' route)
- Node, MongoDB, or PHP versions not matching across machines
HOST: {
HYPERVISOR(VMware/Virtual Box): {
1: [Mac]
2: [Windows]
3: [Linux]
}
}
HOST: {
DOCKER: {
[Container - 1][Container - 2][Container - 3];
}
}
The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components.
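Because containers share the host's single kernel (unlike VMs, which each boot their own), the kernel version you see on the host is the one every container on that machine runs on. A quick check on a Linux host:

```shell
# Print the host kernel release; every container on this machine shares
# this same kernel. With Docker installed, `docker run --rm alpine uname -r`
# reports the same version from inside a container.
uname -r
```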
# Check whether Docker is installed
docker --version
# Update Packages
sudo apt update
# Install Prerequisite Packages
sudo apt-get install curl apt-transport-https ca-certificates software-properties-common
## Add Docker's GPG Key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
## Add Docker Repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Update Packages Again
sudo apt update
# Install Docker
sudo apt install docker-ce docker-ce-cli containerd.io
# Verify Docker Installation
sudo systemctl status docker
# Allow running Docker without `sudo` by adding your user to the docker group (optional)
sudo usermod -aG docker your_username
# Update Packages
sudo apt update # For Debian
sudo dnf update # For Fedora
# Install Docker
sudo apt install docker.io # For Debian
sudo dnf install docker # For CentOS/Fedora
# Start Docker and Enable Auto-Start (Optional)
sudo systemctl start docker
sudo systemctl enable docker
# Verify Docker Installation
sudo systemctl status docker
Note: Docker Compose ships with Docker Desktop; on Linux, install it separately (the docker-compose-plugin package for `docker compose`, or the legacy standalone `docker-compose` binary)
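Which Compose flavor you end up with depends on how Docker was installed. A small sketch to detect what is on the PATH (the `compose_flavor` helper name is made up for this note):

```shell
# Report which Docker Compose flavor (if any) is available:
# the v2 plugin (`docker compose`) or the legacy `docker-compose` binary.
compose_flavor() {
  if docker compose version >/dev/null 2>&1; then
    echo "compose v2 plugin"
  elif command -v docker-compose >/dev/null 2>&1; then
    echo "legacy docker-compose"
  else
    echo "compose not installed"
  fi
}
compose_flavor
```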
Aspect | Docker Image | Docker Container |
---|---|---|
Definition | A Docker image is a static, immutable blueprint for an application. | A Docker container is a running instance of a Docker image, i.e. the environment in which the image executes. |
Content | Includes application code, dependencies, and settings. | Includes the runtime environment for executing an image. |
Composition | Images are composed of a series of read-only layers stacked on top of each other. | Containers are dynamic instances of Docker images that can be executed on a host machine. |
Immutability | Images are immutable, meaning once created, their content cannot be changed. | Containers have a read-write layer on top of the read-only image layers. |
Sharing/Distribution | Images can be easily shared and distributed across different environments and systems. | Containers provide process and filesystem isolation to prevent interference with the host system. |
Storage | Stored in a registry (e.g., Docker Hub). | Exists on the host machine or in a Docker Swarm cluster. |
Building Process | Images are created using a build process defined in a Dockerfile. | Containers can have runtime configurations, such as environment variables and network settings. |
Versioning | Docker images can be versioned using tags, allowing tracking and deployment of specific releases. | Containers have a lifecycle that includes creation, starting, stopping, pausing, and deletion. |
Storing Configuration | Configuration files needed for the application to run are included in the image. | Containers have runtime configurations, such as environment variables, network settings, and exposed ports. |
Application Code | The image includes the source code of the application. | Containers encapsulate the application code and dependencies, ensuring consistency across different environments. |
Dependencies | Dependencies, such as libraries and frameworks, are included in the image. | Containers include dependencies required for running the application. |
Usage | Used as a template to create containers. | Executed to run an application in an isolated environment. |
Note: In summary, a Docker image is a static, immutable template, while a Docker container is a dynamic, runnable instance created from an image. Images serve as blueprints, and containers are where applications run.
Aspect | Docker Containers | Virtual Machines (VMs) |
---|---|---|
Abstraction Level | Application-level virtualization. | Hardware-level virtualization. |
Resource Efficiency | Lightweight, shares OS kernel. | Heavier, includes full OS for each VM. |
Performance | Generally better performance. | Slightly higher overhead due to full OS. |
Isolation | Process and file system isolation. | Strong isolation with separate OS instances. |
Portability | Highly portable across environments. | Less portable due to OS and format issues. |
Startup Time | Faster startup times. | Slower startup times for VMs. |
Resource Duplication | Less duplication as containers share OS. | More duplication as each VM has its OS. |
Compatibility | Linux-based Docker images cannot run natively on a Windows host; Docker Desktop (which superseded Docker Toolbox) runs them inside a lightweight Linux VM. | A VM of any OS can run on any host OS. |
Use Case | Microservices, CI/CD, lightweight apps. | Multiple OS instances, legacy applications. |
Density | Higher container density on a host. | Lower VM density due to heavier resource use. |
Examples | Docker, Kubernetes. | VMware, VirtualBox, Hyper-V. |
Docker Command | Description |
---|---|
`docker --version` | Displays the Docker version installed on your system. |
`docker info` | Provides detailed information about the Docker installation, including containers and images. |
`docker pull [image]` | Downloads a Docker image from a registry. |
`docker images` | Lists all locally available Docker images. |
`docker ps` | Displays a list of running containers. |
`docker ps -a` | Shows all containers, including stopped ones. |
`docker run [options] [image]` Example: `docker run -p 3000:3050 -d --name [container_name] [image_name]` | Creates and starts a new container based on the specified image. |
`docker exec [options] [container-id] [command]` Example: `docker exec -it [container] /bin/bash` | Runs a command inside a running container. |
`docker stop [container-id]` | Stops a running container. |
`docker start [container-id]` | Starts a stopped container. |
`docker restart [container-id]` | Restarts a running or stopped container. |
`docker rm [options] [container-id]` | Removes one or more stopped containers. |
`docker rmi [options] [image-id]` | Deletes one or more Docker images. |
`docker build [options] [path]` Example: `docker build -t [image-name] .` | Builds a Docker image from a Dockerfile at the given path. |
`docker network [options]` | Manages Docker networks, allowing containers to communicate with each other. |
`docker volume [options]` | Manages Docker volumes, providing persistent storage for containers. |
`docker logs [options] [container]` | Retrieves the logs from a running or stopped container. |
`docker image prune` | Removes unused Docker images. |
`docker system prune -af` | WARNING: removes unused Docker resources to free up disk space: all stopped containers, all networks not used by at least one container, all images without at least one associated container, and all build cache. |
Options:

- Common Options:
  - `-d, --detach`: Run containers in the background.
  - `--name`: Assign a name to the container.
  - `--rm`: Automatically remove the container when it exits.
  - `-it`: Interactive mode, often used with a terminal.
- Container Management Options:
  - `--restart`: Specify container restart policies.
  - `--entrypoint`: Override the default entry point.
  - `--env`: Set environment variables.
  - `-p, --publish`: Publish container ports to the host, also known as port binding.
- Image and Build Options:
  - `-t, --tag`: Name and optionally a tag in the 'name:tag' format.
  - `--file`: Specify the name of the Dockerfile.
  - `--no-cache`: Do not use the cache when building the image.
- Compose Options:
  - `--scale`: Set the number of containers to run for a service.
  - `--volumes`: Mount host paths or named volumes.
- Network Options:
  - `create`: Create a network.
  - `connect`: Connect a container to a network.
  - `inspect`: Display detailed information about a network.
- Volume Options:
  - `create`: Create a volume.
  - `inspect`: Display detailed information about a volume.
  - `rm`: Remove one or more volumes.
- Logging Options:
  - `--tail`: Number of lines to show from the end of the logs.
  - `--follow`: Follow log output.
  - `--timestamps`: Show timestamps in log output.
These are some commonly used options, and there are many more available for specific use cases. You can refer to the official Docker documentation for a comprehensive list and detailed explanations of options.
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define a set of services, networks, and volumes in a YAML file, and then use that file to create and manage all the components of your application as a single unit.
version: "3.7" # The Docker Compose file format version refers to the version of the specification used to define the structure and syntax of the Docker Compose file.
# https://docs.docker.com/compose/compose-file/compose-versioning/#compatibility-matrix
services:
admin_dashboard: # container name [--name]
image: nurmdrafi/admin-dashboard:dev # this container will run this image
restart: unless-stopped # Restart policy: always, unless-stopped, on-failure, etc.
environment: # Environment Variables
- NODE_ENV=development
- NEXT_PUBLIC_STAGING_BASE_URL=
- NEXT_PUBLIC_PRODUCTION_BASE_URL=
ports: # Port Binding
- 3050:3000 # (host:container)
mongodb:
image: mongo
restart: unless-stopped
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
ports:
- 27017:27017
mongo-express:
image: mongo-express
environment:
- ME_CONFIG_MONGODB_ADMINUSERNAME=admin
- ME_CONFIG_MONGODB_ADMINPASSWORD=password
- ME_CONFIG_MONGODB_SERVER=mongodb
ports:
- 8081:8081 # mongo-express listens on port 8081 inside the container
# host port: The port on the host machine where you want to map the container's service. This is the port you can use to access the service from outside the host machine.
# container port: The port inside the container where the service is running.
# network: Docker Compose takes care of creating a common network for the services automatically
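The HOST:CONTAINER order in a port binding is easy to mix up. A tiny helper (the `split_port_mapping` name is made up for this note) that splits a Compose-style mapping string and labels each side:

```shell
# Split a Compose-style "HOST:CONTAINER" port mapping and label each side.
split_port_mapping() {
  echo "host port: ${1%%:*}"       # everything before the first colon
  echo "container port: ${1##*:}"  # everything after the last colon
}
split_port_mapping "3050:3000"
# prints:
# host port: 3050
# container port: 3000
```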
Docker Compose Command | Description |
---|---|
`docker-compose up` | Creates and starts containers based on the services in the docker-compose.yml. |
`docker-compose up -d` | Starts containers in the background (detached mode). |
`docker-compose down` | Stops and removes containers and networks (add -v to also remove volumes). |
`docker-compose ps` | Lists containers associated with the Docker Compose project and their status. |
`docker-compose logs [service]` | Displays logs of the specified service or all services. |
`docker-compose exec [service] [cmd]` | Executes a command in a running container of the specified service. |
`docker-compose build` | Builds or rebuilds Docker images defined in docker-compose.yml. |
`docker-compose config` | Validates and outputs the composed configuration. |
`docker-compose pull` | Pulls the latest images for services defined in docker-compose.yml. |
`docker-compose pause` | Pauses all services. |
`docker-compose unpause` | Unpauses all services. |
`docker-compose scale [service=num]` | Scales a service to the specified number of containers (deprecated; prefer `docker-compose up --scale service=num`). |
`docker-compose down -v` | Stops and removes containers, networks, and volumes, including named volumes. |
A Dockerfile is a plain text configuration file used by Docker to build a Docker image. It contains a set of instructions that specify how to assemble a Docker image. Each instruction in the Dockerfile creates a layer in the image, and these layers are stacked to form the final image.
Key components of a Dockerfile include:
- Base Image: Specifies the base image from which the Docker image will be built. This is typically an existing image from Docker Hub or another registry. The Dockerfile typically starts with a base image.
- Working Directory: Defines the working directory inside the container where subsequent commands will be executed.
- Copy Files: Copies files from the host machine to the container. This is often used to include application code, configuration files, or other necessary assets.
- Run Commands: Executes commands within the container during the build process. These commands are often used to configure the environment, install software, and perform other setup tasks.
- User: Sets the user that will run the subsequent commands.
- Environment Setup: Sets up the environment inside the container, which may include installing packages, setting environment variables, and configuring the system.
- Expose Ports: Specifies which network ports the container will listen on at runtime.
- Entrypoint: The entrypoint.sh script inside a Docker image typically serves as the entry point for the container when it starts. It is a shell script that is executed when the container is launched. The purpose of using an entrypoint script is to perform any necessary setup or configuration before the main command is run.
Here are common tasks that an entrypoint.sh script might handle:
1. Setting Environment Variables: The script can set environment variables needed by the application.
2. Configuration: Configuration files or settings required by the application can be generated or modified.
3. Database Migrations: If the application uses a database, the script might handle database migrations or other setup tasks.
4. Running Additional Processes: Additional processes or services required by the application can be started.
5. Handling Signals: The script can trap signals (such as SIGTERM) to gracefully shut down the application when the container is stopped.
6. Dynamic Configuration: The script might generate configuration files dynamically based on the container's environment or other factors.
7. Any Pre-Processing: Any other tasks that need to be performed before starting the main application.
Example:
#!/bin/sh
# Check that required environment variables are set
echo "Check that we have NEXT_PUBLIC_API_URL vars"
test -n "$NEXT_PUBLIC_STAGING_BASE_URL"
test -n "$NEXT_PUBLIC_PRODUCTION_BASE_URL"
# Replace placeholder strings in files with corresponding environment variable values
echo "Replacing placeholder strings in files"
find /app/.next \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i "s#NEXT_PUBLIC_STAGING_BASE_URL#$NEXT_PUBLIC_STAGING_BASE_URL#g"
find /app/.next \( -type d -name .git -prune \) -o -type f -print0 | xargs -0 sed -i "s#NEXT_PUBLIC_PRODUCTION_BASE_URL#$NEXT_PUBLIC_PRODUCTION_BASE_URL#g"
# Start the Next.js application
echo "Starting Nextjs"
exec "$@"
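To see what the find/sed pipeline in the script above actually does, here is a minimal re-run of the placeholder-replacement step against a scratch directory (the `replace_placeholders` helper and the example URL are made up for illustration; GNU `sed -i` is assumed):

```shell
# Replace a placeholder string with a real value in every file under a
# directory, exactly like the entrypoint.sh pipeline does for /app/.next.
replace_placeholders() {
  # $1 = directory, $2 = placeholder string, $3 = replacement value
  find "$1" -type f -print0 | xargs -0 sed -i "s#$2#$3#g"
}

demo=$(mktemp -d)
echo 'fetch("NEXT_PUBLIC_STAGING_BASE_URL/api/users")' > "$demo/chunk.js"
replace_placeholders "$demo" NEXT_PUBLIC_STAGING_BASE_URL "https://staging.example.com"
cat "$demo/chunk.js"
# prints: fetch("https://staging.example.com/api/users")
```

This is why the placeholder strings baked into the image at build time (the `ENV NEXT_PUBLIC_...=NEXT_PUBLIC_...` lines below) can be swapped for real values at container start.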
- CMD: Specifies the default command that is executed when a container starts.
- Just read the comments
# Stage 1: Install dependencies
# Base image
FROM node:18-alpine AS deps
# Install project dependencies
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
# Set working directory
WORKDIR /app
# Copy <src> <dest>
# The source paths are relative to the build context (your project directory)
# The destination path is inside the image filesystem (must end in / for multiple sources)
COPY package.json package-lock.json ./
# Install Packages
RUN npm ci
# Stage 2: Rebuilds the source code, sets environment variables, and runs the production build
FROM node:18-alpine AS builder
WORKDIR /app
# Copy everything
COPY . .
# Copy node_modules from previous layer
COPY --from=deps /app/node_modules ./node_modules
# Set environment variables
ENV NEXT_PUBLIC_STAGING_BASE_URL=NEXT_PUBLIC_STAGING_BASE_URL
ENV NEXT_PUBLIC_PRODUCTION_BASE_URL=NEXT_PUBLIC_PRODUCTION_BASE_URL
ENV NEXT_PUBLIC_STAGING_CNL_BASE_URL=NEXT_PUBLIC_STAGING_CNL_BASE_URL
ENV NEXT_PUBLIC_PRODUCTION_CNL_BASE_URL=NEXT_PUBLIC_PRODUCTION_CNL_BASE_URL
ENV NEXT_PUBLIC_MAP_API_ACCESS_TOKEN=NEXT_PUBLIC_MAP_API_ACCESS_TOKEN
ENV NEXT_PUBLIC_VERIFY_API_KEY=NEXT_PUBLIC_VERIFY_API_KEY
ENV NEXT_PUBLIC_SOCKET_API_KEY=NEXT_PUBLIC_SOCKET_API_KEY
ENV NEXT_PUBLIC_SOCKET_APP_SECRET=NEXT_PUBLIC_SOCKET_APP_SECRET
ENV NEXT_PUBLIC_SOCKET_APP_ID=NEXT_PUBLIC_SOCKET_APP_ID
ENV NEXT_PUBLIC_SOCKET_CLUSTER=NEXT_PUBLIC_SOCKET_CLUSTER
ENV NEXT_PUBLIC_WS_HOST=NEXT_PUBLIC_WS_HOST
ENV NEXT_PUBLIC_WS_PORT=NEXT_PUBLIC_WS_PORT
ENV NEXT_PUBLIC_WSS_PORT=NEXT_PUBLIC_WSS_PORT
ENV NEXT_PUBLIC_SOCKET_AUTH_ENDPOINT=NEXT_PUBLIC_SOCKET_AUTH_ENDPOINT
# Run production build
RUN npm run build
# Stage 3: Creates the production image, sets permissions, user, exposes ports, disables telemetry, and defines the entrypoint and default command to run the application.
FROM node:18-alpine AS runner
WORKDIR /app
# Set environment (production / development)
ENV NODE_ENV=production
# Copy required files from previous layer
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
COPY --from=builder /app/entrypoint.sh ./entrypoint.sh
# Set permissions
RUN chmod +x /app/entrypoint.sh
# Create a group with GID 1001 named 'nodejs'
RUN addgroup -g 1001 -S nodejs
# Create a user with UID 1001 named 'nextjs' and add to the 'nodejs' group
RUN adduser -S nextjs -u 1001
# Set ownership of the /app/.next directory recursively to 'nextjs:nodejs'
RUN chown -R nextjs:nodejs /app/.next
# Switch to the 'nextjs' user
USER nextjs
# Define container port
EXPOSE 3000
# Disable the telemetry collection in Next.js
RUN npx next telemetry disable
# Define the entrypoint with the default command to run the application
ENTRYPOINT ["/app/entrypoint.sh"]
# Set the default command to run the application
CMD npm run start
next.config.js
// Next Js Config
const nextConfig = {
output: 'standalone',
}
module.exports = nextConfig
Dockerfile
# Stage 1: Install dependencies
FROM node:18-alpine AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Stage 2: Build the project
FROM node:18-alpine AS builder
WORKDIR /app
COPY . .
COPY --from=deps /app/node_modules ./node_modules
ENV NEXT_PUBLIC_PRODUCTION_BASE_URL=NEXT_PUBLIC_PRODUCTION_BASE_URL
ENV NEXT_PUBLIC_STAGING_BASE_URL=NEXT_PUBLIC_STAGING_BASE_URL
ENV NEXT_PUBLIC_MAP_API_ACCESS_TOKEN=NEXT_PUBLIC_MAP_API_ACCESS_TOKEN
ENV NEXT_TELEMETRY_DISABLED=1
ENV SKIP_HUSKY=1
RUN npm run build
# Stage 3: Serve the project
FROM node:18-alpine AS runner
WORKDIR /app
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone .
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/entrypoint.sh .
RUN chmod +x /app/entrypoint.sh
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
RUN chown -R nextjs:nodejs /app/.next
USER nextjs
EXPOSE 3000
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["node", "server.js"]
# Stage 1: Setup Node.js with NVM ===>
FROM ubuntu:24.10 AS base
# Set working directory
WORKDIR /app
# Install dependencies
RUN apt-get update && \
apt-get install -y curl
# Install NVM
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
# Install Node.js
RUN bash -c "source ~/.nvm/nvm.sh \
&& nvm install 22.0.0 \
&& nvm use 22.0.0 \
&& nvm alias default 22.0.0"
# Make Node.js available globally by copying it to the system-wide path
ENV NODE_VERSION="22.0.0"
ENV NVM_DIR="/root/.nvm"
ENV PATH="$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH"
# Stage 2: Install dependencies ===>
FROM ubuntu:24.10 AS deps
# Set working directory
WORKDIR /app
# Copy NVM from the base stage
COPY --from=base /root/.nvm /root/.nvm
# Set Node.js environment
ENV NVM_DIR="/root/.nvm"
ENV NODE_VERSION="22.0.0"
ENV PATH="$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH"
# Copy package.json
COPY package*.json ./
# Install npm packages
RUN npm cache clean --force \
&& npm ci --only=production --force
# Stage 3: Build the project ===>
FROM ubuntu:24.10 AS builder
# Set working directory
WORKDIR /app
# Copy NVM from the base stage
COPY --from=base /root/.nvm /root/.nvm
# Set Node.js environment
ENV NVM_DIR="/root/.nvm"
ENV NODE_VERSION="22.0.0"
ENV PATH="$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH"
# Copy necessary files
COPY ./ ./
COPY --from=deps /app/node_modules ./node_modules
# Set environment variables
ENV NODE_ENV=production
ENV NEXT_PUBLIC_BASE_URL=NEXT_PUBLIC_BASE_URL
ENV NEXT_PUBLIC_MAP_API_ACCESS_TOKEN=NEXT_PUBLIC_MAP_API_ACCESS_TOKEN
ENV NEXT_PUBLIC_TRACE_BASE_URL=NEXT_PUBLIC_TRACE_BASE_URL
ENV NEXT_PUBLIC_TRACE_API_ACCESS_TOKEN=NEXT_PUBLIC_TRACE_API_ACCESS_TOKEN
ENV SKIP_HUSKY=1
ENV NEXT_TELEMETRY_DISABLED=1
# Build the project
RUN npm run build
# Stage 4: Serve the project ===>
FROM ubuntu:24.10 AS runner
# Set working directory
WORKDIR /app
# Copy NVM and Node.js binaries for non-root user
COPY --from=base /root/.nvm /home/nextjs/.nvm
# Set Node.js environment for non-root user
ENV NODE_VERSION="22.0.0"
ENV NVM_DIR="/home/nextjs/.nvm"
ENV PATH="$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH"
# Copy necessary files from the builder stage
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/entrypoint.sh ./
# Ensure entrypoint.sh is executable
RUN chmod +x /app/entrypoint.sh
# Create a non-root user and set ownership
RUN groupadd -g 1001 nodejs \
&& useradd -u 1001 -g nodejs -m -s /bin/bash nextjs \
&& chown -R nextjs:nodejs /app
# Ensure Node.js is accessible for the non-root user
RUN bash -c "chown -R nextjs:nodejs /home/nextjs/.nvm"
# Switch to non-root user
USER nextjs
# Expose port
EXPOSE 3000
# Start the server
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["node", "server.js"]
docker-compose.yaml for local
version: '3.7'
services:
jti-dashboard:
image: nurmdrafi/jti-dashboard:main
container_name: jti-dashboard
build:
context: .
dockerfile: Dockerfile
env_file:
- .env.local
ports:
- "3000:3000"
docker-compose-server.yaml for server
version: '3.7'
services:
jti-dashboard:
image: nurmdrafi/jti-dashboard:main
container_name: jti-dashboard
environment:
- NEXT_PUBLIC_PRODUCTION_BASE_URL=NEXT_PUBLIC_PRODUCTION_BASE_URL
- NEXT_PUBLIC_STAGING_BASE_URL=NEXT_PUBLIC_STAGING_BASE_URL
- NEXT_PUBLIC_MAP_API_ACCESS_TOKEN=NEXT_PUBLIC_MAP_API_ACCESS_TOKEN
ports:
- "3090:3000"
Note:
- Finding an available port
netstat -tulpn | grep 3091
(if there is no output, the port is available)
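The same check can be wrapped in a small helper using `ss`, the modern replacement for `netstat` (the `port_status` function name is made up for this sketch):

```shell
# Check whether a host port is free before putting it in a port binding.
# `ss -tuln` lists TCP/UDP listening sockets without resolving names.
port_status() {
  if ss -tuln 2>/dev/null | grep -q ":$1 "; then
    echo "port $1 is in use"
  else
    echo "port $1 is available"
  fi
}
port_status 3091
```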
Great write-up! One suggestion: in `netstat -tulpn | grep 3091`, note that `netstat` is deprecated from Ubuntu 20.04 onward; use `ss` from now on. `ss` works just like `netstat`:
ss -tulpn | grep 3091