Docker Compose Nginx Proxy Setup Guide for 2026
Setting up an Nginx proxy with Docker Compose is one of the most practical ways to publish multiple web applications on the same server in a clean, manageable structure. If you run WordPress, a Node.js API, a Laravel dashboard, a static website, or a small admin tool on a single VPS, you do not need to expose a separate port for each one. Instead, you can use Nginx as the front door. A user visits yourdomain.com, Nginx receives the request, and then forwards it to the correct container behind the scenes. That is why this is not just a matter of “writing a proxy rule.” It is a small but important architectural decision that should be considered together with domains, SSL, container networking, logs, security, and maintenance habits.
In the simplest terms, an Nginx reverse proxy distributes incoming HTTP and HTTPS requests from the outside world to services running internally. Docker Compose lets you define these services in a single file. This keeps details such as where the Nginx container runs, which port the application listens on, whether both containers share the same network, and where volumes are mounted inside docker-compose.yml instead of scattering them across separate terminal commands. If you are still getting comfortable with the Linux side, terminal basics, service logic, and file permissions will make this setup much easier. The Beginner's Guide to Learning Linux Commands is a useful starting point.
In this guide, we will use a classic and easy-to-follow setup: one Nginx container will run as a reverse proxy, a sample web application will run behind it, and only ports 80 and 443 will be exposed to the outside world. The logic remains the same in 2026: do not expose unnecessary ports to the internet, place containers on the same Docker network, handle domain-based routing with Nginx, and plan SSL certificates so they can be renewed automatically. If you use Ubuntu or Debian as your server, first make sure the system is up to date and that Docker and the Docker Compose plugin are installed. After a fresh Ubuntu installation, the Ubuntu 26.04 LTS post-installation checklist can work as a basic maintenance checklist before you begin.
Keep the example folder structure simple. On the server, you can create a directory such as /opt/nginx-proxy and place docker-compose.yml, nginx.conf, and a few Certbot folders inside it. In production, putting everything randomly under the root directory makes future maintenance harder. Application-based folders are much easier to manage. For example, /opt/nginx-proxy/nginx/conf.d can hold site configurations, /opt/nginx-proxy/certbot/www can hold Let’s Encrypt validation files, and /opt/nginx-proxy/certbot/conf can hold certificates.
Your initial docker-compose.yml file can look like this:
```yaml
services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx_proxy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certbot/www:/var/www/certbot:ro
      - ./certbot/conf:/etc/letsencrypt:ro
    networks:
      - proxy

  app:
    image: nginx:stable-alpine
    container_name: demo_app
    restart: unless-stopped
    volumes:
      - ./app:/usr/share/nginx/html:ro
    networks:
      - proxy

networks:
  proxy:
    driver: bridge
```
Do not let the appearance of two separate Nginx services confuse you. The first one is the proxy facing the outside world. The second one only represents the sample web application. In a real project, this app service could be a Node.js container, a Laravel application running with PHP-FPM, a Python FastAPI service, or something else entirely. The important point is this: you do not need to expose the app service to the host machine on ports such as 3000, 8080, or 5000. Because both containers are on the same Docker network, the proxy container can reach it by service name, such as app:80.
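As a sketch of that swap, the `app` service could be replaced with a Node.js container like the one below. The image tag, internal port (3000), and start command are illustrative assumptions, not part of the guide's setup:

```yaml
# Hypothetical replacement for the "app" service in docker-compose.yml.
# Image tag, port 3000, and server.js are assumptions for illustration.
  app:
    image: node:22-alpine
    container_name: demo_app
    restart: unless-stopped
    working_dir: /srv/app
    volumes:
      - ./app:/srv/app:ro
    command: ["node", "server.js"]
    # Still no "ports:" entry: only Nginx on the shared network reaches it,
    # by service name, as app:3000.
    networks:
      - proxy
```

With a service like this, the proxy rule would point at http://app:3000 instead of http://app:80.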
For the Nginx site configuration, create the file ./nginx/conf.d/demo.conf. At the first stage, testing without SSL over HTTP is the cleanest approach. Your domain’s DNS record should already point to the server’s IP address. Then this configuration will be enough to test the basic route:
```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        proxy_pass http://app:80;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
Replace example.com with your own domain. Then place a simple index.html file in the app folder and run docker compose up -d. If the sample application appears when you open your domain in the browser, the basic proxy path is working. One of the most common mistakes here is that the application container listens only on localhost. If the service inside the container is bound to 127.0.0.1, other containers cannot reach it. For Node.js and Python applications, you usually need to bind the service to 0.0.0.0.
There are two common approaches for SSL. The first is to run a Certbot container manually, obtain the certificate, and then move the Nginx configuration to HTTPS. The second is to use more automated tools such as Nginx Proxy Manager or Traefik. In this guide, we will continue with the classic Nginx + Certbot method because it keeps control in your hands and teaches the underlying structure more clearly. You can add the Certbot service to docker-compose.yml like this:
```yaml
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - ./certbot/www:/var/www/certbot
      - ./certbot/conf:/etc/letsencrypt
```
Use the following command to obtain the first certificate:
```bash
docker compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot -d example.com -d www.example.com
```
After the certificate is created successfully, add a 443 block to the Nginx file and redirect HTTP traffic to HTTPS. The configuration can look like this:
```nginx
server {
    listen 80;
    server_name example.com www.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://app:80;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```
After saving this file, test the configuration with docker compose exec nginx nginx -t. If there are no errors, reload Nginx with docker compose exec nginx nginx -s reload. You usually do not need to stop and start the whole container. A small but important habit: after changing an Nginx file, test it with nginx -t before restarting or reloading. A single missing semicolon can affect every live site on the server.
When you want to publish multiple sites, creating a separate conf file for each one is enough. For example, you can write api.conf for api.example.com and panel.conf for panel.example.com. If services such as app, api, and panel are connected to the same proxy network in docker-compose.yml, Nginx can route each domain to the right service. This setup is efficient for small VPS environments. Still, when you place many services on one server, do not ignore log sizes, disk usage, and backups. Publishing a web server is not only about starting the application; firewall and SSH security are part of the same picture. If you use a public VPS, the UFW installation and basic firewall guide can help you keep only the necessary ports open.
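As one possible sketch, an api.conf for such a subdomain might look like this. The service name `api` and port 8000 are placeholders for whatever your actual container exposes internally:

```nginx
# Hypothetical ./nginx/conf.d/api.conf routing api.example.com to an "api" service.
# "api" and port 8000 are illustrative; match them to your own compose service.
server {
    listen 80;
    server_name api.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        proxy_pass http://api:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Each additional site follows the same pattern: a new conf file, a server_name, and a proxy_pass to the matching service name.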
On the security side, the cleanest starting point is to avoid exposing application ports other than 80 and 443 on the host. If you do not write ports for the app service in Docker Compose, the outside world cannot reach it directly. Only Nginx on the same Docker network can access it. This is especially important for admin panels and API services. If a database also runs inside Compose, do not expose the MySQL or PostgreSQL port to the internet. Let the application container connect to the database by service name. If external access is truly needed, solve it with a temporary, limited, and secure method.
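A minimal sketch of that idea, assuming a PostgreSQL container added to the same Compose file (image tag and environment values are illustrative):

```yaml
# Hypothetical database service: note the deliberate absence of a "ports:" section.
  db:
    image: postgres:16-alpine
    container_name: app_db
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: change-me   # placeholder, use a secret in practice
    volumes:
      - ./db-data:/var/lib/postgresql/data
    networks:
      - proxy
```

Because no host port is published, only containers on the same network can connect, and the application reaches the database by service name, for example db:5432.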
For performance, you do not need to drown the first day in complex tuning. Nginx is already lightweight for serving static files and proxying requests. Even so, applications that upload large files may require a higher client_max_body_size value. Services that use WebSocket need Upgrade and Connection headers. If you want the application to see the real user IP address, X-Forwarded-For and X-Real-IP must be sent correctly. If your application framework does not know it is running behind a proxy, you may run into HTTPS redirect loops, incorrect callback URLs, or secure cookie issues.
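The directives mentioned above can be sketched as additions to the HTTPS server block. The 50m limit and the server_name are illustrative values, not recommendations from the guide:

```nginx
# Sketch of the tuning directives discussed above; values are assumptions.
server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Raise the upload limit above the 1m default for apps that accept large files
    client_max_body_size 50m;

    location / {
        proxy_pass http://app:80;
        proxy_http_version 1.1;
        # Required so WebSocket connections survive the proxy hop
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Pass the real client address and scheme through to the application
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```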
For certificate renewal, you need to run certbot renew regularly. You can do this with cron on the host or with a simple systemd timer. After renewal, Nginx must be reloaded so it can read the new certificate. A practical cron line follows this logic: run certbot renew, and if it succeeds, reload Nginx. On a production server, it is safer to test the command manually first and only then wire it into automation. Because certificates are renewed before they expire, users usually do not notice any interruption.
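One way that logic could look as a cron entry, assuming the Compose project lives in /opt/nginx-proxy as in this guide (the schedule and paths are assumptions to adapt):

```
# Hypothetical /etc/cron.d entry: renew at 03:00 daily, reload Nginx only on success.
# The -T flag disables TTY allocation, which cron does not provide.
0 3 * * * root cd /opt/nginx-proxy && docker compose run --rm certbot renew && docker compose exec -T nginx nginx -s reload
```

Certbot only renews certificates that are close to expiry, so running this daily is harmless; most days it exits without touching anything.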
If something breaks, the first places to check are clear: are the containers running with docker compose ps, what does Nginx say in docker compose logs nginx, does nginx -t accept the configuration, does DNS really point to the correct IP address, and does the firewall allow ports 80 and 443? A “502 Bad Gateway” error usually means Nginx cannot reach the service behind it. The service name may be wrong, the application may be listening on a different port, or the container may not be connected to the same network. An “SSL certificate not found” error usually appears when the certificate path and domain name do not match.
The best part of setting up an Nginx proxy with Docker Compose is that the system can grow without becoming messy. You might start with one static site today, add an API tomorrow, and later move the admin panel into a separate container. Nginx shows the same calm front door to the outside world while you organize the services inside however you like. For small and medium-sized projects in 2026, this is still one of the most sensible setups: a simple Compose file, readable Nginx rules, automatically renewed SSL, closed application ports, and regular log monitoring. It is not flashy, but when configured properly, it can run for a long time without causing headaches.