It's easy to set up a local development environment on your Windows or macOS machine, but most servers run on Linux. This difference in operating system, among many other factors, is the reason behind the popular phrase "but it worked on my machine".
Just because your app works on your machine does not necessarily mean it will work on the production server. That's why we have Docker.
Docker packages applications into isolated environments called containers, so that when you want to ship to production, you simply deploy a Docker container. As long as the production server can run Docker, it can run our container.
Although creating a container can be as simple as writing a Dockerfile and running a few CLI commands, there are some caveats along the way. In this article, I will discuss:
How to create a custom Docker container for our React app.
How to use Docker Compose to document and simplify our Docker commands.
How to use Docker volumes to automatically reflect changes made on our local machine inside the container, without rebuilding the container from scratch on every code modification.
How to run tests inside the container.
How to use Nginx as a production server instead of the local development server to serve the production version of our application.
How to create a CI pipeline that runs our tests whenever we accept and merge a pull request into the main branch.
Once this is done, we can easily deploy our application to a cloud service like AWS, either manually or through an automated process like a CD pipeline. Whichever deployment method we choose, what they all have in common is that development, testing, and deployment stay smooth and predictable.
Step 1: Set Up a React App
Create a React app using Vite by running:
npm create vite@latest
and following the prompts to set up a React with TypeScript app.
Set up testing with Jest and React Testing Library (RTL) by following this article.
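To have something meaningful for the test runner to execute later in the container, here is a minimal smoke test sketch (assuming the Jest + RTL setup from that article and the default Vite template; the file name and assertion are illustrative):

// src/App.test.tsx
import { render, screen } from "@testing-library/react";
import App from "./App";

test("renders the app without crashing", () => {
  render(<App />);
  // Assumes the default Vite template heading; adjust the matcher to your UI
  expect(screen.getByText(/Vite \+ React/i)).toBeInTheDocument();
});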
Finally, write these configurations in vite.config.ts (note the two imports, which the default Vite template already includes):

import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    host: "0.0.0.0",
  },
});
By default, the Vite dev server is only reachable on localhost. In this tutorial, the app will run inside a container but be accessed from outside the container, on our host machine. For this reason, we need the server to listen on all network interfaces, not just localhost. That's why we change server.host in the defineConfig options. For more info, you can refer to the docs here.
Step 2: Set Up a Development Dockerfile
Now it's time to learn how to run our app with a development server. Create a Dockerfile.dev in the root of your app directory as follows:
FROM node:alpine
WORKDIR /app
# Copy package.json first so the npm install layer is cached
# unless the dependencies change
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD [ "npm", "run", "dev" ]
Note that the default command of the container is defined by the CMD instruction in the Dockerfile.dev file. It is npm run dev, which starts a development server on the running machine, which here is the Docker container.
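For context, running the app with plain Docker would look something like this (a sketch; the image tag is illustrative):

docker build -f Dockerfile.dev -t react-app-dev .
docker run -p 5173:5173 -v "$(pwd)":/app -v /app/node_modules react-app-dev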
We could keep typing Docker commands like these, but I will use Docker Compose to automate them and also to act as a documentation file for our project. Create docker-compose.yaml and paste these configurations inside:
services:
  react-app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "5173:5173"
    volumes:
      - .:/app
      - /app/node_modules
Docker Compose file breakdown:
services: refers to the containers we want to create. Any services (containers) created here automatically share a network, which means the services can communicate with each other. Here we have a single service (container) for the React app.
build.context: where to look for the custom Dockerfile, which is Dockerfile.dev in this case. The build context is the root of the working directory.
ports: maps ports between the host and the container. The first port refers to the port on the host machine, and the second refers to the mapped port inside the container.
volumes: volumes are a way to persist data across container restarts. Instead of copying the files from the host machine into the container, the container keeps a reference to the files and folders on the local machine. The first volume in the Docker Compose file maps every file and folder in the root directory of the local machine to the working directory of the container.
The second volume is a bookmark: it tells Docker that /app/node_modules should map to nothing on the local machine. The reason is that we already installed the node modules inside the container, and there is no need to map them to the local machine. The purpose of using volumes is to reflect the changes we make on the local machine inside the container, so we don't need to rebuild the image and recreate the container every time we change the codebase.
Now it's time to learn how to run our tests inside the container. For this, we will add a new service to the Docker Compose file as follows:
services:
  tests:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app
      - /app/node_modules
    command: ["npm", "run", "test"]
It is very similar to the react-app service, except that we don't need to open any ports for the container. We also added the command option to override the default command of the container created from Dockerfile.dev, which is npm run dev.
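You can also run the tests on demand instead of alongside the dev server by targeting the service directly (same service name as above):

docker compose run --rm tests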
If we run docker compose up --build, we should see output indicating that the development server is running on port 5173 and that the tests have passed successfully. You can go to localhost:5173 to see the development version of the React app. If you make any changes to the codebase, they will be reflected in the browser.
Step 3: Set Up a Production Dockerfile
The development server that comes with Vite is optimized for development, but it's not good for production because it would be far too slow for that purpose. To run our app in production, we have to use the build version of the app, produced by npm run build, which generates a folder called dist containing a very simple HTML file plus the JavaScript (and CSS) bundles optimized for production. How do we serve this HTML file from a production server? A common choice is Nginx, a lightweight web server that is very good at serving static files.
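If you're curious what this build step produces before containerizing it, you can run it locally (exact file names vary by Vite version and content hash):

npm run build
ls dist
# e.g. index.html and an assets/ folder with hashed .js and .css bundles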
Let's create a Dockerfile inspired by the example in the Nginx image docs:
# Build stage: install dependencies and produce the dist folder
FROM node:alpine AS builder
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build

# Serve stage: copy the built files into Nginx
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
Let's break down the Dockerfile:
FROM node:alpine AS builder creates a build stage, which allows us to use multiple base images in a single Dockerfile. The first stage is called builder; it builds the app and generates the dist folder inside the container.
Then we start a second stage from the Nginx Docker image. By default, the Nginx web server expects the files it serves to be inside /usr/share/nginx/html, as mentioned in the image docs. So we have to copy the output of the builder stage to this location. That's why we write COPY --from=builder /app/dist /usr/share/nginx/html.
Finally, we have to expose port 80, which is the port the Nginx server listens on.
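One caveat not covered by the original setup: if your app uses client-side routing, the default Nginx config returns 404 for deep links. A common fix (a sketch; the file name and the extra COPY line are my assumptions) is a small config that falls back to index.html, copied into the image with COPY ./default.conf /etc/nginx/conf.d/default.conf in the Nginx stage:

# default.conf -- SPA fallback (illustrative)
server {
  listen 80;
  root /usr/share/nginx/html;
  location / {
    try_files $uri $uri/ /index.html;
  }
}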
To interact with the production version of our app, we run these two commands:
docker build -t react-prod-app . to build an image called react-prod-app from the Dockerfile.
docker run -p 8080:80 react-prod-app to create and start a container from the react-prod-app image.
Now hit localhost:8080 on your local machine, and you should see your React app up and running in production mode.
Step 4: Build the CI Pipeline
Now the final step! We are going to use GitHub Actions to build a CI pipeline. The purpose of this pipeline is to automate running the tests whenever changes are merged into the main branch. We do this to make sure that the tests still pass with all the features that other developers have built.
In the app directory, create a workflow file called docker-test.yaml inside the .github/workflows directory (GitHub Actions only picks up workflow files from that directory). Inside the docker-test.yaml configuration file, let's write the following commands for the CI server:
name: Run tests on merged code before deployment

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Login to Docker
        run: docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
      - name: Build Docker image
        run: docker build -t markmaksi/react-app -f Dockerfile.dev .
      - name: Run tests
        run: docker run -e CI=true markmaksi/react-app npm run test
The first block is the name of the Action that will run on the CI server.
The second block means that the Action will run once a push to the main branch is made, whether through a direct push or after a PR is merged.
The third and final block defines the jobs that will be executed inside the Action. Here we have a single job called "build". Inside this job there are a few steps, and in every step we run some commands.
Define the Docker Username and Password Secrets
Go to Docker Hub, log in, and open your account settings. Then go to Personal Access Tokens and create an access token. The account name (markmaksi in this example) is the username, and the generated access token (the string starting with dckr_pat_) is the password.
Go to your GitHub repository associated with the codebase and open the Settings tab, where you will find Secrets and variables. We have to add two Repository Secrets: one called DOCKER_USERNAME and one called DOCKER_PASSWORD. Fill them with the values you got from Docker Hub.
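If you prefer the terminal, the same secrets can be set with the GitHub CLI (assuming gh is installed and authenticated; the values are placeholders):

gh secret set DOCKER_USERNAME --body "your-dockerhub-username"
gh secret set DOCKER_PASSWORD --body "your-dockerhub-access-token"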
Once the secrets are set, all you have to do is add, commit, and push your code to GitHub (see the commands below). If you then go to the Actions tab, you will see all the Actions, their jobs, and even the logs coming from your container at every step.
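For reference, the push itself can be as simple as (commit message illustrative):

git add .
git commit -m "Add Docker setup and CI workflow"
git push origin main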
You should see that the Action has run successfully and that all steps inside the Action workflow file were executed without errors. If you like this article, you can subscribe to my YouTube channel where I share similar interesting content.
Also, I would appreciate it if you buy me a pizza here to support my mission to create the best content on web app development for passionate engineers like you and me.