Docker
Containers from scratch
In the past, companies used a bare metal approach to hosting servers online, where they owned all the infrastructure and machines. Here were the downsides:
- Expensive
- Not scalable
- Sometimes your machines broke down
Companies have since moved away from bare metal to virtual machines, where multiple OSs run on one physical machine (a technique called virtualization). This allows you to run multiple servers in parallel on a single computer.
While you don’t have to manage the infrastructure yourself, there are downsides to VMs:
- You have to manage and update all the software yourself
- You have to install everything yourself.
- Tenants sharing the same host machine can launch attacks against each other
containers
Containers solve these problems. All you have to do is tell Docker what software you want to download and run in a container, and it handles the rest for you.
Containers running on VMs are also more secure than running workloads directly on the VM itself.
chroot
We can start making containers from scratch by running this docker command to enter the interactive shell on an ubuntu OS:
docker run -it --name docker-host --rm --privileged ubuntu:jammy
We can then see what version of the OS we are on by logging the /etc/issue file that all ubuntu machines have:
cat /etc/issue
You can create a new folder and then run the chroot command to make that folder the root of the filesystem for a new process:
chroot <new-folder>
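A bare `chroot` alone won't give you a working shell, because the jail has no binaries or libraries in it yet. As a minimal sketch (run inside the privileged ubuntu container from above; `ldd` lists the shared libraries `bash` links against, and exact paths may differ on your system):

```shell
# create the folder that will become the new root, with a bin directory
mkdir -p /my-new-root/bin

# copy bash into the jail, preserving its path
cp /bin/bash /my-new-root/bin/

# copy each shared library bash needs, preserving its directory layout
for lib in $(ldd /bin/bash | grep -o '/lib[^ ]*'); do
  mkdir -p "/my-new-root$(dirname "$lib")"
  cp "$lib" "/my-new-root$(dirname "$lib")"
done

# make the folder the new root; bash now sees /my-new-root as /
chroot /my-new-root bash
```

Inside the jailed shell, `cd /` lands you in `/my-new-root`, and the rest of the host filesystem is invisible.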
Docker Basics
The absolute basics
One thing to keep in mind is the concept of layer caching. Docker caches each instruction as a layer and avoids repeating steps whose inputs haven't changed, so if we install dependencies before copying our application code, the install step will be cached.
- No matter how much we change our application code, since the install step ran beforehand, it will be unaffected in the cache.
- The only time we have to reinstall dependencies is when the contents of our `package.json` changes.
# install node lts version on alpine linux distro, which is only 5mb in size
FROM node:lts-alpine
# create app folder in our docker container, which will contain all application code
# /app acts as root.
WORKDIR /app
# copy package.json for install caching. Only reinstall if the file changes.
COPY package.json ./
# install all dependencies but omit dev dependencies
RUN npm install --only=production
# copy every single folder and file from root directory into the workdir, app.
COPY . .
RUN npm run build --prefix client
# prevent hackers from hacking into our container and running commands as root
USER node
# run this command when the container starts, in exec form (safer)
CMD ["npm", "start", "--prefix", "server"]
# set environment variable
ENV PORT=8000
# expose port 8000 to the outside world
EXPOSE 8000
- `ENV` sets environment variables that will be available during the container process
- `RUN` runs shell commands.
- `EXPOSE` just serves as documentation for the port that the container process will be running on, but is overridden by port forwarding
- `CMD` is the main command you want to run. This is the actual command that runs when the container starts. There can only be one of these per Dockerfile, and for good reason.
NOTE
The CMD command will not be executed when an image is built, but rather when a container is built from the image and spins up.
Dockerignore
To avoid copying over or including files and folders you don’t want, docker has a similar concept to git called a .dockerignore file. You’ll often put node_modules and .git in there.
node_modules
.git
Building docker images
The docker build command looks for a dockerfile and builds a docker image from that.
- `docker build .`: builds a docker image by finding the dockerfile in the current directory and building a docker image from that
- `docker build . -t <IMAGE_NAME>:<TAG>`: builds a docker image from the dockerfile in the current directory and allows you to name the image and specify a tag for a version.
When deciding which OS docker image to build off of, think, “what can I do to capture this moment and make sure it doesn’t break in the future?”
Using the --tag option or the shorthand -t option, you can name and tag your docker builds like so:
docker build --tag <image-name>:<tag-number> .
- The `image-name` will be the name of your image, and the `tag-number` will be the specific docker tag versioning to fetch a specific container version.
- The tag number doesn’t really have to be a number, but it’s recommended. In fact, it’s “latest” by default.
Running docker images
To run a specific docker image, we need to refer to the image by its name, which you should have specified when you created the docker image.
docker run -it -p <LOCAL_PORT>:<CONTAINER_PORT> <IMAGE_NAME>
The docker run command runs the container based on the instructions in the dockerfile. You can then use the docker ps command to look at all your currently running containers
- The `-p` flag specifies the port mapping behavior we define for running the docker image.
  - In the docker image, we expose a port for public use. Our application code running in the docker image will use that to create a server.
- The `-p` syntax is like `-p <LOCAL_PORT>:<CONTAINER_PORT>`, so if we do a mapping of `3000:8000`, it shows that we want to map the container’s exposed 8000 port to our localhost:3000.
- Keep in mind, it only makes sense to map publicly exposed ports in our container.
- The `-it` flag puts you in an interactive container, meaning it gives you some visual feedback.
Dockerfile
Variables
environment variables
You can specify environment variables with the ENV command and then provide a key value pair, like so:
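A minimal sketch (the variable names and values here are just illustrative):

```dockerfile
# each ENV line sets one KEY=VALUE pair,
# available to later build steps and to the running container
ENV PORT=8000
ENV NODE_ENV=production
```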

args
The ARG command in docker allows you to define build-time variables in your dockerfile and then use template string interpolation with ${...} to access those variables.
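A minimal sketch (the version value is just illustrative):

```dockerfile
# define a build-time variable with a default value
ARG NODE_VERSION=20

# interpolate it with ${...}
FROM node:${NODE_VERSION}-alpine
```

You can override the default at build time with `docker build --build-arg NODE_VERSION=18 .`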

Making secure containers
When creating the docker file, remember that the default user is root.
If you run a docker container with docker run and then override the main command with whoami, you’ll then see that the root user is active.
In the below example, we take advantage of the node user already supplied on node images, as we use that user as the owner of all our app resources.
FROM node:18-alpine
# security practice: run as non-root user, prevent root user access
# runs as the node user
USER node
# this becomes the new root folder for the rest of the commands
WORKDIR /home/node/app
COPY index.js index.js
CMD ["node", "index.js"]
- `FROM <image>`: starts our container by building off another docker image from dockerhub. Here we use node 18 running on alpine linux
- `USER <user>`: switches to the specified user, often done to prevent root access
- `WORKDIR <path>`: creates the specified path and cd’s into it. The current working directory becomes that path in the container. It becomes the “root folder” for the rest of all the commands in the docker file.
- `CMD <command>`: defines the main command for a container. The command is written in exec form.
You can also make custom users like so:
FROM node:18-alpine
# 1. create new user called new_user (alpine has no useradd or bash; use busybox adduser)
RUN adduser -D new_user
# 2. security practice: run as non-root user, prevent root user access
USER new_user
# 3. this becomes the new root folder for the rest of the commands
WORKDIR /home/node/app
COPY index.js index.js
CMD ["node", "index.js"]
We want the policy of least power for best security practices, so we’ll change the user:
- The node image gives us another user called `node` with limited privileges. Switch to this user using the `USER node` command.
- Supply file-writing commands like `COPY` with `--chown=node:node` as the first argument to make the user the owner or executor of the action.
Note that `USER` only affects `RUN`, `CMD`, and `ENTRYPOINT` - `COPY` still creates files owned by root unless you pass the `--chown` flag. Read-only access works fine either way, but use `--chown` whenever the user needs to own or modify what was copied:
COPY --chown=node:node index.js index.js
Examples
example 1: secure vite app
This first example shows how to correctly use user and user groups for maximum security within a container process. You can pretty much just copy this from project to project:
FROM node:20-alpine
# 1. for security, run as non-root user
RUN addgroup app && adduser -S -G app app
USER app
# 2. set the working directory to /app
WORKDIR /app
# 3. copy package jsons for docker cache
COPY package*.json ./
# 3a. deal with weird ownership issues
USER root
RUN chown -R app:app .
USER app
# 4. install dependencies
RUN npm install
# 5. copy the rest of the files to the working directory
COPY . .
# 6. expose port
EXPOSE 5173
# 7. run app (exec form)
ENTRYPOINT ["npm", "run", "dev"]
example 2: node server
For the docker image to handle dependencies in a node app, this is how the dockerfile should be:
FROM node:18-alpine
USER node
# we need to own this folder so we can run npm install in it
RUN mkdir /home/node/app
WORKDIR /home/node/app
# copy package and package lock json for layer caching
COPY package*.json ./
# install all dependencies with npm ci
RUN npm ci
COPY . .
ENV PORT=3000
CMD ["npm", "start"]
The most important part here is dependencies. Before copying over any files, we follow these steps:
- Put `node_modules` in the `.dockerignore`
- Copy over the package and package-lock JSON files.
- Run the `npm ci` command for a clean install, which looks towards the more accurate `package-lock.json` to install libraries. It also automatically deletes `node_modules` before reinstalling.
Doing the install early on ensures container caching and faster rebuilding as long as we don’t modify the contents of the package JSON.
Running containers
Naming containers
You can name containers while running them to ensure easy ways to reference them later on. You do this through the `--name` flag.
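A quick sketch (the name `web` and the nginx image are just illustrative):

```shell
# name the container "web" so we can reference it later
docker run -d --name web nginx

# the name now works anywhere a container id would
docker logs web
docker stop web
```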
Supplying environment variables
You can supply pairs of environment variables when running a container with the --env flag. This is useful for dynamically supplying environment variables instead of statically establishing them in the Dockerfile.
The basic syntax for using the flag is --env KEY=VALUE or you can use the shorthand -e.
docker run --env PORT=3000 --env APP_NAME=docker <container>
Automatically deleting containers
You can automatically delete a container after you stop its process with the --rm flag. This is useful to avoid extra steps for cleanup afterwards.
docker run --rm <container>
detached vs interactive mode
By default, `docker run` runs a container in the foreground as a blocking process in the CLI, showing the container output. If that's not what you want, you can switch to detached mode, which runs the container as a daemon process in the background.
Here are the two options for explicitly specifying detached or interactive mode:
- `-d`: running a container with `docker run -d <image>` runs it in detached mode.
- `-it`: running a container with `docker run -it <image>` runs it in interactive mode.
You can also switch between detached and interactive mode:
- `docker attach <container>`: reattaches to the process of a detached container
- `Ctrl-P` then `Ctrl-Q`: the key sequence that detaches you from an attached container without stopping it (there is no `docker detach` command)
interactive shell mode
If you want to write some commands to do some testing from within the container's environment, then you have to go into interactive shell mode, which you can do like so:
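For example (assuming an ubuntu-based image; alpine images only ship `sh` unless you install bash):

```shell
# override the main command with a shell and attach interactively
docker run -it ubuntu:jammy bash

# or open a shell inside an already-running container
docker exec -it <container-name> bash
```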

Running commands on a container
The basic syntax for running a docker container and overriding the CMD with your own command is as follows:
docker run <image> <cmd>
docker run -it <image> <cmd> # for interactive
docker run --rm <image> <cmd> # for cleanup after process exit
You specify the image to run, and then an optional command at the end called the cmd, which is the main command a docker image runs as soon as it spins up.
NOTE
Every container has a main command specified by CMD, and passing in your own command as the last argument of docker run <container> will override that main command and run it.
- So `docker run alpine:3.10 ls` spins up alpine linux v3.10, and then runs the `ls` command in the root directory instead of just dropping you into a linux shell.
If you want to inspect a container and execute commands on it while it's already running another process, that's when you use the docker exec command, to avoid overriding the CMD.
docker exec <container> <command>
Copying files to containers
You can copy files and folders into and out from a running docker container using the docker cp command.
- The basic usage is `docker cp <src> <destination>`, where the src and destination are filepaths either located on your local machine or on the container.
- To refer to a remote container path, the syntax is `<container-name>:<path>`. For example, the path `boring_guy:/data` refers to the `/data` folder in the `boring_guy` container.
docker cp <src> <destination>
# copy everything in the data folder to the fluffy waffle container
# at the /app/data path
docker cp data/. fluffy_waffle:/app/data
TIP
Doing this is a poor man's version of a bind mount, so just use that instead.
CLI reference
docker build
- `docker build --file=<filename>`: points to a specific dockerfile for building an image. Useful if you have different dockerfiles for prod and dev.
- `docker build --no-cache`: builds a docker image but without caching any layers or fetching from the cache.
- `docker build --target=<stage-name>`: builds a docker image from the specified stage in the dockerfile, which is useful for debugging or dev vs prod purposes.
Containers
container lifecycle
- `docker stop <container-id-or-name>`: stops the specified container
- `docker start <container-id-or-name>`: starts the specified stopped container
- `docker rm <container-id-or-name>`: deletes the specified container if it's stopped
- `docker rm -f <the-container-id>`: stops and force deletes the specified container
- `docker container prune`: deletes all stopped containers
container info
- `docker logs <container-id-or-name>`: views the logs of the specified container
- `docker container ls`: lists all running containers
- `docker ps -a`: lists all containers, including stopped ones
- `docker ps -q`: lists only the IDs of currently running containers
Images
- `docker image ls`: lists all images
- `docker image prune`: deletes all dangling images
- `docker rmi <image>`: deletes the image
- `docker image inspect <image:tag>`: inspects the specified image and its version
The container registry
Volumes and Bind mounts
You can think of both bind mounts and volumes as a way of maintaining state and persisting data throughout container builds and runs.
- Bind mounts expose a folder from your file system to a container through a direct mount (not a copy), meaning changes to that folder will also be reflected in the container.
- Volumes create a folder for your container that persists to local data on your filesystem, abstracted away under Docker's management.
NOTE
Bind mounts are more performant since they're a direct portal to your filesystem, but that also makes them less secure. Volumes are a higher abstraction and are more secure.

Let's also learn some basic terminology:
- ephemeral: an adjective used for referring to data or systems that do not maintain persistent state. An example is containers by default - as soon as you delete them, they lose everything that ever happened in them.
- snowflake server: a server whose state and configuration were built up by hand and can't be reproduced. If it relies on ephemeral data sources or services, losing all your data is a real possibility.
- If your server crashes and you were running an ephemeral, local data store like SQLite on it, then you lose all your data and the database resets back to the original state before the server crash.
Volumes vs bind mounts: final verdict
There is one key difference to understand between bind mounts and volumes: with volumes, Docker is in control of the data's location and lifecycle on the host.
- bind mounts: container gains complete unobscured access to the mounted folder from your local filesystem
- volumes: Docker controls where the volume's data lives on your local host filesystem, which is obscured from the user. Has more control over that data.
IMPORTANT
The key here is this: bind mounts are file systems managed by the host. They're just normal files in your host being mounted into a container. Volumes are different because they're a new file system that Docker manages and mounts into your container. These Docker-managed file systems are not visible to the host system (they can be found, but they're designed not to be).
(Yes, I know I said "key" a lot).
You specify a bind mount or volume you want to attach to your container with the --mount option, and they have two different syntaxes:
- Named volume: `type=volume,src=my-volume,target=/usr/local/data`
- Bind mount: `type=bind,src=/path/to/data,target=/usr/local/data`
When to Choose Volumes
If it would be annoying to have container processes reflect changes on your local filesystem, it's better to use volumes.
- For databases and other stateful applications.
- When data portability and ease of backup/migration are important.
- When you want Docker to manage the data's lifecycle and location.
- When security is a significant concern, as volumes offer better isolation.
When to Choose Bind Mounts
Basically the only use cases are copying over dotfiles and developing in a container.
- During local development for iterating on code or configurations quickly.
- When you need to provide existing configuration files from the host.
- When performance in a development context outweighs the benefits of volume management.
Bind Mounts
They are like portals to your host computer, providing a container direct access to your filesystem. Whatever code you change in a mounted folder will be immediately reflected in the container without having to rebuild.
Here is the basic syntax for specifying a bind mount:
docker run --mount type=bind,source=$LOCAL_DIR_SOURCE,target=$TARGET_CONTAINER_DIR <your_image_name>
You specify a mount with the `--mount` option, and then when passing in the mount options, you specify a bind mount with the `type=bind` option. In fact, there are a few key value pairs to care about:
- `source`: Path on the host machine to create a bind mount from
- `target`: Path inside the container where the host source path should be mounted
- `type`: Set to `bind` for bind mounts.
- `consistency`: Optional. `cached` can improve performance.
Here is a complete example:
docker run --mount \
  type=bind,source=/path/to/your/host/directory,target=/path/in/container \
  your_image_name

docker run --mount type=bind,source="$(pwd)"/build,target=/usr/share/nginx/html \
  -p 8080:80 nginx
A shorthand syntax for creating a bind mount is to use the -v option, which is less explicit. The syntax is like so, where you specify the local filepath to the container filepath mapping:
docker run -v <local-filepath>:<container-filepath> <your_image_name>
bind mount security
Because of the security risk with bind mounts, it is crucial that you give the container access to folders mindfully and with unprivileged permissions. For example, you can make bind mounts readonly in the mount option, preventing the container from changing your mounted content remotely:
- This is how you do it with the `--mount` option:

docker run --mount \
  type=bind,source=/localpath,target=/containerpath,readonly your_image_name

- This is how you do it with the `-v` option:

docker run -v \
  /path/to/your/host/directory:/path/in/container:ro your_image_name
Volumes
Volumes are a way of saving a container file on your local filesystem thereby having the changes persist across container runs. The actual local path of where the container data lives is abstracted away, as opposed to bind mounts.
Here are the key characteristics of volumes in docker:
- Docker-Managed: Docker handles the creation, location, and lifecycle of volumes.
- Persistent Data: Data stored in volumes persists even after the container that created or used it is stopped or removed.
- Portability: Volumes are more portable than bind mounts. Because Docker manages the volume's location, you can move a volume between hosts more easily (though this requires specific tools or methods).
- Abstraction: Volumes provide an abstraction layer over the host's filesystem, making them less dependent on the host's specific directory structure.
- Data Isolation: Data in volumes is typically stored in an area managed by Docker, making it more isolated from the host's general filesystem.
- Backup and Migration: Docker provides commands and APIs for backing up and migrating volumes.
Two major use cases for using docker volumes are for a persistent database and for sharing volumes between containers. Even when the container is deleted, the volume isn't - persistence achieved.
There are two ways to use volumes:
method 1) unnamed volume mount
The below command using -v and only supplying the path to the directory you want to volume mount will automatically create an anonymous volume in Docker.
docker run -v /app/data your_image_name
WARNING
However, be warned - the volume name will be some random obfuscated id.
method 2) named volume mount
- Create a volume with the `docker volume create <volume-name>` command
- Run the container with the volume attached. Use the `--mount` flag, and then specify these three keys: type, src, target.
  - `type=volume`: specifies that we are mounting a volume
  - `src=<volume-name>`: the volume to mount
  - `target=<path>`: where in the container filesystem to mount the volume

docker run -dp 127.0.0.1:3000:3000 --mount \
  type=volume,src=todo-db,target=/etc/todos getting-started
You can also use the -v flag, which is the shorthand for mounting, and in this case, volume mounting:
docker run -v <volume-name>:<container_path> <image_name>
docker volume reference
Here is a reference for the volumes commands:
- `docker volume ls`: lists all volumes
- `docker volume create <volume_name>`: creates a volume with the specified name
- `docker volume inspect <volume_name>`: gets detailed info about the specified volume.
- `docker volume rm <volume_name>`: deletes the specified volume.
- `docker volume prune`: deletes all unused volumes.
volume examples
The below example runs a postgres container and automatically creates a volume called postgres_data.
docker run -d --name my-postgres \
-e POSTGRES_PASSWORD=mysecretpassword \
--mount type=volume,source=postgres_data,target=/var/lib/postgresql/data \
postgres:latest
Networks
Networks are a useful way of connecting containers to each other via a shared network without needlessly exposing ports running SQL or mongo to the real world.
NOTE
The main benefit is that they enable containers to talk to each other without exposing their ports directly to the host or the outside world.
Here are 4 benefits of working with docker networks in multi-container applications:
- Container Communication: Enable containers to talk to each other without exposing their ports directly to the host or the outside world.
- Isolation: Network isolation prevents containers from accessing networks or resources they shouldn't have access to.
- Service Discovery: Allows containers to find and connect to other containers by their name or alias within the same network.
- Portability: Networking configurations are defined as part of the container or service definition, making them more portable.
creating networks
You can create networks with the docker network create command, also specifying the driver:
# create the network
docker network create --driver=bridge <network-name>
The --driver flag is used to specify which network driver manages the network. If you want the network to be available to containers on your local machine without its ports being automatically exposed to the host, bridge is the default.
In fact, there are a few special drivers you can specify:
- `bridge`: the default driver, giving containers a private network on the host
- `host`: removes network isolation entirely; the container shares the host's network directly, with its ports visible on the host
- `none`: no networking at all, no external network connectivity.
running stuff on networks
You can then start running something on the network with docker run, but by specifying in which network you want to run the container with the --network flag:
# start the mongodb server, mapping container port 27017 to host port 5432
docker run -d --network=app-net -p 5432:27017 --name=db --rm mongo:7
Let’s dissect the above command:
- `--network=app-net`: runs the container in the network
- `-p 5432:27017`: port forwards the container’s 27017 port to the host machine port on 5432.
Since you can run multiple containers on the same network, all those containers have exposable ports they can each interact with. Here are the steps to understand working with networks in docker:
- You list each port a container runs on with the `EXPOSE <PORT_NUMBER>` command in a dockerfile.
- When you run multiple different containers in the same network, they can all access each other through the exposed container ports, which are not accessible by the host machine.
- You can make individual containers accessible to the host by using port forwarding with `-p <HOST_PORT>:<CONTAINER_PORT>`.
# 1. create the network
docker network create my_custom_bridge
# 2. run the database in the network
docker run -d --name db_container --network my_custom_bridge postgres:latest
# 3. run the app in the network so it can access the db,
#    and port forward it so the host machine can access it
docker run -d --name app_container --network my_custom_bridge -p 8080:3000 \
  my_app_image
Here's an English high-level equivalent of what we did in the above example:
- First we create a network.
- Then we create a postgres container in that network, listening on port 5432 inside the network.
- Then we create a container for our nodejs server that runs in the network on port 3000. In our application code, it can reach the postgres database at `db_container:5432`, because containers on the same user-defined network can resolve each other by container name.
- Then we do port forwarding to expose our app's container port 3000 to the host machine port 8080 to see our app on `http://localhost:8080`
network reference
- `docker network ls`: List all Docker networks
- `docker network inspect <network-name>`: Get detailed information about a specific network, including connected containers, subnet information, and gateway
- `docker network create --driver <driver> <network-name>`: Create a new custom network with the specified driver
- `docker network rm <network-name>`: Remove a custom network (only if no containers are connected)
- `docker network connect <network-name> <container>`: Connect an existing container to a network
- `docker network disconnect <network-name> <container>`: Disconnect a container from a network
Advanced container configuration
Alpine linux
Alpine linux is a very small, barebones distribution of linux sitting at around 5mb, which makes it not only ideal for production but also secure, since it follows the principle of least power.
apk is the package manager we use for alpine linux, and we use apk add <package> to add a package.
Reducing size
Alpine linux with node makes the final container size around 80mb. While that's still small, we can reduce it to around 55mb following these steps:
- Build from the alpine OS base image.
  - The `apk add --update <package>` command first updates the apk package index, and then installs packages.

FROM alpine:3.10
RUN apk add --update nodejs npm

- Add a user called node:

RUN addgroup -S node && adduser -S node -G node
USER node
The final code looks like this:
FROM alpine:3.10
RUN apk add --update nodejs npm
RUN addgroup -S node && adduser -S node -G node
USER node
RUN mkdir /home/node/code
WORKDIR /home/node/code
COPY package-lock.json package.json ./
RUN npm ci
COPY . .
CMD ["node", "index.js"]
TIP
Alpine is the production image - as coined by the industry - because it's so small, but in development, use a larger image that has all the linux utilities, like debian.
Here is another example of a fullstack node alpine app:
# syntax=docker/dockerfile:1
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Dockerfile reference guide at
# https://docs.docker.com/go/dockerfile-reference/
ARG NODE_VERSION=24.0.2
FROM node:${NODE_VERSION}-alpine
# Use development for build to install devDependencies
ENV NODE_ENV=development
ENV USING_DOCKER=true
ENV USING_SERVER=true
# install bash
RUN apk add --no-cache bash
# Set working directory for all build stages.
RUN mkdir -p /usr/src/app
RUN chown -R node:node /usr/src/app
WORKDIR /usr/src/app
# Copy and install server dependencies
COPY package.json package-lock.json ./
RUN npm ci
# Copy and install frontend dependencies (including devDependencies for build)
COPY frontend/package.json frontend/package-lock.json ./frontend/
RUN cd frontend && npm install
# Copy the rest of the source code (excluding node_modules via .dockerignore)
COPY . .
# Build the frontend
RUN cd frontend && npm run build
# Change to production environment for runtime
ENV NODE_ENV=production
# Expose the port that the application listens on.
EXPOSE 5000
# Switch to node user for runtime
USER node
# Run the application.
CMD ["npm", "start"]
DANGER
A really really important thing to note here is that npm skips installing dev dependencies if you set NODE_ENV=production, and vite is typically a devDependency, so be careful. That is what leads to bugs.
Multistage builds
We can use a concept called multistage builds, where each stage uses a different base image.
Each stage makes its own container, and then throws away the container at the end of the stage. Only the final stage ends up in the image that is actually built, but what makes this so useful is that we can use the COPY --from=<stage-name> option to hook into any stage's filesystem and copy over its files.
The basic syntax is like so:
- `FROM <base_image> as <stage_name>`: creates a stage, pulling from the base image and naming it. The `as` keyword is what makes a container a stage instead of a normal container.
- `COPY --from=<stage_name>`: hooks into the specified stage's filesystem and allows you to copy files from there into the current stage or final container.
FROM <base_image> as <stage_name>
RUN mkdir /build
WORKDIR /build
# ... install dependencies, copy project files

FROM <production_base_image>
WORKDIR /production_code
# copy dependency files and source code into final container
COPY --from=<stage_name> /build .
NOTE
The main use case for multi-stage builds would be to use a large image with lots of pre-installed tools as a stage, install dependencies, and then throw that away. Then you would use a small production container as your non-stage final container to just copy over all dependencies from the builder stage without having to install dependency management tools like npm, resulting in a smaller final image.
For example, we could use a larger node:20 image as the build stage to install our dependencies, throw that away, and then use a smaller alpine image to copy over the node_modules and source code from the builder stage without having to install npm and then run npm install in the alpine container. We essentially get to cut out npm for free, resulting in a ~10mb decrease in image size.
# build stage: install dependencies
FROM node:12-stretch as builder
WORKDIR /build
COPY package-lock.json package.json ./
RUN npm ci
COPY . .
# runtime stage. Everything above is thrown away, but the files still remain
FROM alpine:3.10
# 1. add node
RUN apk add --update nodejs
RUN addgroup -S node && adduser -S node -G node
USER node
# 2. create work directory
RUN mkdir /home/node/code
WORKDIR /home/node/code
# 3. copy files from builder stage into current directory
COPY --from=builder /build .
CMD ["node", "index.js"]
Here is an example in depth, where we install npm dependencies and copy files over in a primary build stage, and then move on to using alpine in a runtime stage to just run the files with node.
- build stage: We create a build stage pulling from the massive `node:20` image, call it "node-builder".
- build stage: We then create a folder, cd into it, install dependencies, and then copy over all project files
- runtime stage: we use the tiny alpine linux image, install nodejs and omit npm to make the container size smaller.
- runtime stage: we create a new user for security reasons, create a working directory and cd into it.
- runtime stage: copy over all files from the `/build` folder from the build stage. We can now run our code.
This approach, where we install all dependencies and copy our source code into the container in the build stage, is useful because we can then "delete" npm in the runtime stage simply by never installing it there.
# build stage
FROM node:20 AS node-builder
RUN mkdir /build
WORKDIR /build
COPY package-lock.json package.json ./
RUN npm ci
COPY . .
# runtime stage
FROM alpine:3.19
RUN apk add --update nodejs
RUN addgroup -S node && adduser -S node -G node
USER node
RUN mkdir /home/node/code
WORKDIR /home/node/code
# copy over all files from the /build folder of the node-builder stage
COPY --from=node-builder /build .
CMD ["node", "index.js"]
final example
This is the final example, where I show how to correctly use USER and chown to build secure, least-privilege containers:
But first, we need to talk about when to run as root and when to run as a regular user:
- root: stay as the root user when you need to install things and create directories
- user: change to a user when copying files to a folder you own, and when running commands from within a folder that the user owns.
The key thing we do in the code below is first make the /build folder (which requires root), then change its ownership to the node user and group with the `chown <user>:<group> <dirname>` command, which lets us copy files into that folder and run commands as we please.
After setting the current user going forward with USER and cd'ing into a folder that user owns, you can modify that folder (running commands, copying files) any way you please. One caveat: COPY still writes files as root by default even after USER, so pass --chown=node:node to COPY when the copied files themselves must be owned by the user; reads and new writes into the node-owned directory (like npm creating node_modules) work either way.
FROM alpine:3.21 AS builder
# Install Node.js and npm
RUN apk add --update nodejs npm
# Create non-root user and group
RUN addgroup -S node && adduser -S node -G node
# Create directory and set ownership (do this as root)
RUN mkdir /build && chown node:node /build
# Switch to non-root user after setup is complete
USER node
WORKDIR /build
# Copy package files with correct ownership
COPY --chown=node:node package*.json ./
# Install dependencies as non-root user
RUN npm install
# Copy remaining files with correct ownership
COPY --chown=node:node . ./
# Build the application
RUN npm run build
# Use nginx for serving static files
FROM nginx:alpine
# Copy only the built files from the builder stage to nginx
COPY --from=builder /build/dist /usr/share/nginx/html
Distroless containers
debian-slim might be a better option than alpine because alpine swaps the usual glibc C library for musl, which can cause subtle bugs in your code, especially in kubernetes.
All distroless containers are based on debian, and you can use them like so:
# build stage
FROM node:20 AS node-builder
WORKDIR /build
COPY package-lock.json package.json ./
RUN npm ci
COPY . .
# runtime stage
FROM gcr.io/distroless/nodejs20
COPY --from=node-builder /build /app
WORKDIR /app
CMD ["index.js"]
Dev containers
Dev containers allow you to launch your VSCode workspace from a dockerfile or another image so you don't have to install things locally, making collaborative development seamless.
Under the hood, VS Code bind mounts your current project into the container and runs a clean install of dependencies; you could wire this up yourself with plain docker, but the dev container workflow in VS Code is much better.
Here are some use cases:
- Ensuring everyone has the same VSCode extensions installed in the workspace
- Ensuring everyone has the same VSCode settings enabled in the workspace
- Ensuring everyone has the same binaries and images installed, like being able to use Deno, Bun, or FFmpeg.
Creating dev containers
All dev container configuration will live inside a .devcontainer folder, specifically pointing to a .devcontainer/devcontainer.json. Here is a basic example of the devcontainer.json:
{
"name": "first dev container",
"dockerFile": "Dockerfile",
"remoteEnv": { "NODE_ENV": "development" },
"build": {
"options": ["--platform=linux/amd64"]
},
"features": {
"ghcr.io/devcontainers/features/common-utils:2": {
"installZsh": "true",
"username": "node",
"upgradePackages": "true"
}
},
"customizations": {
"vscode": {
"extensions": [
"dbaeumer.vscode-eslint",
"sdras.fortnite-vscode-theme",
"esbenp.prettier-vscode"
],
"settings": {
"workbench.colorTheme": "Fortnite",
"terminal.integrated.shell.linux": "/bin/bash"
}
}
},
"postCreateCommand": "npm install"
}
Here are the important keys:
- `name`: the dev container's name
- `dockerFile`: the path to the Dockerfile that accompanies this. Only use this key if you're not using a base image for the devcontainer with the `image` key.
- `image`: the base image name to pull from Docker Hub. Only use this key if you're not using a custom Dockerfile with the `dockerFile` key.
- `appPort`: the ports on the docker container to expose
- `forwardPorts`: a list of exposed ports from the container to map to the host machine ports
- `postCreateCommand`: the command to run after the dev container is created and finishes building. Think doing something like installing dependencies with `npm install` here.
- `postStartCommand`: the command to run each time the container starts. Think doing something like starting a process with `npm run dev` here.
- `customizations.vscode`: any customization settings to apply for the new environment
    - `settings`: any VSCode settings to apply once inside the devcontainer environment
    - `extensions`: any VSCode extensions to install once inside the devcontainer environment
- `features`: a list of additional tools to install (useful if not using a custom Dockerfile to install those things)
- `build.options`: a list of command-line options to pass when building the container, such as `--platform` to specify which platform the container should be built for.
Now let's dive deep into the heart of building a dev container: choosing the image to base it off of. There are three ways to do so, but you can only choose one per dev container.
- prebuilt image: use an image with the `image` key, like `node:24`, or a special Microsoft image made specifically for dev containers.
- custom dockerfile: point to the custom image you want to use through the `dockerFile` key.
- docker compose: point to the compose yaml file you want through the `dockerComposeFile` key, but you must also specify the name of the service to spin up through the `service` key.
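As a sketch of the docker compose variant, a minimal devcontainer.json might look like this (the compose file name, service name, and workspace path here are hypothetical):

```json
{
  "name": "compose-based dev container",
  // point at the compose file and the service VS Code should attach to
  "dockerComposeFile": "docker-compose.yml",
  "service": "app",
  // the folder inside that service's container to open as the workspace
  "workspaceFolder": "/home/node/code"
}
```

Note that compose-based dev containers need `workspaceFolder` set explicitly, since there is no single image to infer it from.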
Rebuilding dev containers
Whenever you make a change to your devcontainer.json or to the Dockerfile it points to, you should rebuild and reopen the devcontainer through the command palette.
Variables in Dev containers
Globally available variables supplied by VSCode as well as environment variables from your host machine can be referenced and interpolated with ${VARIABLE_NAME} syntax.
Environment Variables
In your devcontainer, you can reference environment variables using the same template string interpolation syntax, but you have to specify which environment variables you want to read from: your local env vars or the ones from your base image/dockerfile:
- `${localEnv:VARIABLE_NAME}`: reads the value of `VARIABLE_NAME` from your local environment variables
- `${containerEnv:VARIABLE_NAME}`: reads the value of `VARIABLE_NAME` from the environment variables set in the base image, dockerfile, or docker compose file
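For example, here is a hedged sketch of forwarding a host variable into the container via `remoteEnv` (the variable names are made up for illustration):

```json
{
  "remoteEnv": {
    // read MY_API_KEY from the host machine's environment
    "API_KEY": "${localEnv:MY_API_KEY}",
    // build a path from a variable defined in the base image
    "CACHE_DIR": "${containerEnv:HOME}/.cache"
  }
}
```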
Built-in variables
Here are some built in variables you can reference:
- `${localWorkspaceFolder}`: the local absolute filepath on your host machine that represents the current workspace.
- `${containerWorkspaceFolder}`: the remote absolute filepath in the container that corresponds to the workspace (mounted from your local one).
Adding Bind mounts
You can use bind mounts to bring dotfiles, aliases, or other important files from your local machine into the container. A bind mount essentially creates a link: modifying a file in either the devcontainer or on your local machine changes it in both environments, so it's not just a simple copy, it's a live view of the folder.
{
"mounts": ["source=${localWorkspaceFolder}/data,target=/container/data,type=bind,consistency=cached"]
}
In the "mounts" key, you provide a list of bind mounts you would like to mount in the devcontainer environment, and each bind mount follows a certain syntax of 4 key-value pairs you need to provide to describe the bind mount behavior:
- `source`: path on the host machine. `${localWorkspaceFolder}` is a variable representing your project folder.
- `target`: path inside the container where the host path should be mounted.
- `type`: set to `bind` for bind mounts.
- `consistency`: optional. `cached` can improve performance.
Dev container CLI
The devcontainer CLI tool offers ways to control devcontainers for your workspace from the CLI:
devcontainer open [path] Open a dev container in VS Code
devcontainer up Create and run dev container
devcontainer set-up Set up an existing container as a dev container
devcontainer build [path] Build a dev container image
devcontainer run-user-commands Run user commands
devcontainer read-configuration Read configuration
devcontainer outdated Show current and available versions
devcontainer upgrade Upgrade lockfile
devcontainer features Features commands
devcontainer templates Templates commands
devcontainer exec <cmd> [args..] Execute a command on a running dev container
Docker Compose
Intro, compose vs kubernetes
Docker compose is useful for development purposes where you need multiple processes/containers talking to each other, like a server, frontend, and database.
- Docker compose handles multiple container interactions with one host
- Kubernetes handles multiple container interactions with multiple hosts, so it’s good for scaling
Compose yaml
Docker compose easily handles networking between containers: exposing database and server ports is as simple as declaring which port each container should expose.
It networks together your services (the containers docker compose builds for you), each of which you specify either from an image on Docker Hub or from a local Dockerfile to build the image from.
All docker compose behavior is written in the docker-compose.yml file:
version: "3"
services:
web:
build: . # build image from folder path to dockerfile
ports: # export and map port 3000
- "${LOCAL_PORT}:${CONTAINER_PORT}"
volumes:
# mount everything in this folder into the container at /home/node/app path
- .:/home/node/app
# you need this to persist node modules
- /home/node/app/node_modules
links: # create network connection to the db container
- db # db is dependency. Wait until it is built
# define any environment variables
environment:
- MONGO_URI="mongodb://db:27017"
- CONTAINER_PORT=3000
- LOCAL_PORT=3000
db:
image: mongo:latest
ports:
- "27017:27017"
The services key specifies the different containers to build as subkeys. Here we are making a web container from our dockerfile and a db container from the mongo image on Docker Hub.
For each service, you can define the behavior of how to manage that container. Here are the most important keys
- `build`: the folder in which the `Dockerfile` is located. Use this if building a container from a dockerfile.
- `ports`: the port forwarding list, in port string form `"<local-port>:<container-port>"`
- `volumes`: any volumes you want to attach
- `image`: use this key to specify a dockerhub image to build from. You cannot use this if `build` is already specified.
- `links`: establishes network connections to other containers via docker networks. It also specifies dependency, meaning that the `web` container will not run until the `db` container builds first.
- `environment`: defines any environment variables.
- `command`: overrides the `CMD` of the container. This should be an array of strings, each string representing a single word.
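As a small illustration of the `command` key overriding `CMD` (the service name and script here are made up):

```yaml
services:
  web:
    build: .
    # replaces the image's CMD; one array element per word
    command: ["node", "server.js", "--port", "3000"]
```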
Reading environment variables
You can define key-value pairs with the `environment` key, or load environment variables from an `.env` file with the `env_file` key. You can use both at the same time.
services:
app:
environment:
- NODE_ENV=development
- DATABASE_URL=postgresql://user:password@db:5432/mydatabase
- API_KEY=${MY_API_KEY} # Read from host environment variable
env_file: # Load environment variables from a file
- .env
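The `.env` file itself is just `KEY=VALUE` lines, one per line (the values below are placeholders):

```
NODE_ENV=development
MY_API_KEY=replace-me
```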
Volumes and bind mounts
You can specify volumes to use in docker compose, and the main advantage of docker compose with volumes is that you can create volumes on the fly, share volumes between services, and attach them easily.
Here are the three ways you refer to bind mount and volumes with docker compose:
- Named volumes: `volume_name:/path/in/container`
- Bind mounts: `/path/on/host:/path/in/container`
- Bind mounts with options: `/path/on/host:/path/in/container:ro` (read-only)
services:
  db:
    volumes:
      - db_data:/var/lib/postgresql/data # Use a named volume
  app:
    volumes:
      - ./app:/usr/src/app # Bind mount local source code
      - /var/log/app_logs:/app/logs # Bind mount host directory for logs

# named volumes must also be declared at the top level
volumes:
  db_data:
depends_on
You can specify that a service needs another service to start before it does using the `depends_on` key. The main use case is a web app that needs the database to be up and running first.
services:
app:
depends_on:
- db
web:
depends_on:
- app
restart policy
The restart key configures the container's restart policy, which controls how a container is restarted after it exits. Here are the different values you can pass:
- `no`: do not automatically restart.
- `on-failure`: restart only if the container exits with a non-zero exit code.
- `always`: always restart, even if the container exits cleanly.
- `unless-stopped`: always restart unless the container is stopped manually.
services:
  app:
    # exits immediately with an error, so the restart policy kicks in
    command: ["sh", "-c", "exit 1"]
    restart: unless-stopped
Docker compose CLI
You run docker compose with docker compose up, which finds the yaml file and runs the specifications.
- The first time you run docker compose, the images will be built or fetched to create the containers, but after that those images are completely cached.
- To rebuild the images so you can get fresh containers, use the `--build` flag: `docker compose up --build`.
Here's the compose reference:
- `docker compose up`: builds and starts the containers
- `docker compose down`: stops and removes the containers
- `docker compose ps`: lists all containers belonging to the current compose project
- `docker compose logs`: shows the logs for all the running compose services
You also have commands to run individual services instead of doing all at once, a great use case for testing out services instead of doing the whole shebang.
- `docker compose run <service_name>`: runs the individual service
- `docker compose run <service_name> <command>`: runs the individual service and overrides the `CMD` with the specified command.
- `docker compose ps <service_name>`: provides detailed info about the specified service.
Watch mode with docker compose
We can create a docker compose yaml file that has a new watch mode, specifying two actions to take with folderpaths or filepaths in our code:
- `sync`: uses bind mounts so we don't have to rebuild our containers. Whatever folder we point to for syncing gets bind mounted.
- `rebuild`: whatever file(s) this action points to, the service is automatically rebuilt when those files change.
And here are the steps we can take:
- Have a `develop` key, which specifies what actions to take when certain files in your codebase change, like rebuilding when the package.json changes and syncing changes for the rest of the frontend code.
- Run the `docker compose watch` command.
# specify the version of docker-compose
version: "3.8"
# define the services/containers to be run
services:
# define the frontend service
# we can use any name for the service.
frontend:
    # build the image for this service from the Dockerfile in the current directory
build: .
ports:
- 5173:5173
# specify the environment variables for the web service
# these environment variables will be available inside the container
environment:
      VITE_API_URL: http://localhost:8000
# volumes:
# - .:/app
# - /app/node_modules
    command: npm run dev -- --host
# this is for docker compose watch mode
# anything mentioned under develop will be watched for changes by docker compose watch
# and it will perform the action mentioned
develop:
# we specify the files to watch for changes
watch:
      # it'll watch for changes in package.json and package-lock.json
      # and rebuild the container if there are any changes
- path: ./package.json
action: rebuild
- path: ./package-lock.json
action: rebuild
# needs target as well (container filesystem)
- path: ./
target: /app
action: sync
Compose examples
Server with postgres
The first step is to create the docker file for the server
FROM denoland/deno:2.3.3
# Prefer not to run as root.
USER deno
WORKDIR /app
COPY deno.lock ./
# Copy the rest of the source files into the image.
COPY . .
# Run the application.
CMD ["deno", "run", "-A", "main.ts"]
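Since this Dockerfile runs `COPY . .`, it's worth pairing it with a `.dockerignore` so local artifacts stay out of the image (these entries are typical examples, not taken from the project above):

```
.git
.env
node_modules
```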
The second step is to create the compose yaml:
- We fetch environment variables from the `.env` file, as specified by the `env_file` key. We read the `PORT` environment variable and use it in our port forwarding scheme to make the server accessible through localhost.
- We only start the server service once the `db` service is healthy, as specified by `depends_on`.
- We use secrets by specifying a global `secrets` key and creating a secret that reads from a text file. We then refer to the secrets we want for a specific service through that service's `secrets` key.
- We create volumes using the global `volumes` key, and then specify a `<volume>:<container-path>` mapping in the specific service's `volumes` key.
services:
server:
build:
context: .
# environment:
# - PORT=3000
env_file:
- .env
ports:
- "${PORT}:${PORT}"
  # The `db` service below defines a PostgreSQL database that the application
  # can use. `depends_on` tells Docker Compose to start the database before
  # the application. The `db-data` volume persists the database data between
  # container restarts. The `db-password` secret is used to set the database
  # password. You must create `db/password.txt` and add a password of your
  # choosing to it before running `docker compose up`.
depends_on:
db:
condition: service_healthy
db:
image: postgres
restart: always
user: postgres
secrets:
- db-password
volumes:
- db-data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=example
- POSTGRES_PASSWORD_FILE=/run/secrets/db-password
- POSTGRES_USER=postgres
expose:
- 5432
healthcheck:
test: ["CMD", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
volumes:
db-data:
secrets:
db-password:
file: db/password.txt
The third step is to correctly access those environment variables through our code. Note that inside the compose network the database is reachable at its service name, so `PGHOST` should be set to `db` rather than `localhost` when running under compose:
import postgres from "https://deno.land/x/postgresjs/mod.js";
// const constants = new Constants();
const sql = postgres({
host: Deno.env.get("PGHOST") || "localhost",
port: Deno.env.get("PGPORT") ? parseInt(Deno.env.get("PGPORT")!) : 5432,
user: Deno.env.get("PGUSER") || "postgres",
password: Deno.env.get("PGPASSWORD") || undefined,
database: Deno.env.get("PGDATABASE") || "example",
});
await sql`CREATE TABLE IF NOT EXISTS users (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT NOT NULL UNIQUE
)`;
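One caveat: `depends_on` with `service_healthy` only gates startup order; the application may still want to retry its first connection if the database restarts later. Here is a minimal, generic retry helper sketched in plain JavaScript; the function name, attempt count, and delay are illustrative, not part of the example above:

```javascript
// retry an async operation a few times with a fixed delay between attempts
async function retry(operation, { attempts = 5, delayMs = 1000 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // wait before the next attempt
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// usage sketch: keep trying the first query until the database accepts connections
// const rows = await retry(() => sql`SELECT 1`, { attempts: 10, delayMs: 500 });
```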