
Nginx as a Proxy Server


Nginx functions as a high-performance proxy server by efficiently forwarding client requests to backend servers and returning their responses, enabling features like load balancing, SSL termination, and content caching.

Introduction

Nginx (pronounced "engine-x") is an open-source web server that can also operate as a reverse proxy, HTTP cache, and load balancer. Its event-driven architecture allows it to handle a large number of concurrent connections efficiently, making it suitable for high-traffic environments. As a proxy, Nginx sits between clients and backend servers, mediating traffic and adding layers of functionality.

Why Use Nginx as a Proxy?

Deploying Nginx as a proxy server offers several operational advantages:

  • Load Balancing: Distributes incoming network traffic across multiple backend servers to improve application responsiveness and reliability.
  • SSL/TLS Termination: Handles encrypted connections from clients, decrypting traffic before forwarding it to backend servers, which can then operate on unencrypted HTTP. This offloads cryptographic processing from application servers.
  • Caching: Stores frequently accessed content, reducing the load on backend servers and decreasing response times for clients.
  • Security: Acts as a buffer, shielding backend servers from direct client access and potential attacks. It can filter requests and enforce access policies.
  • High Availability: In conjunction with load balancing, Nginx can route traffic away from unhealthy backend servers, ensuring continuous service.
  • Traffic Management: Allows for URL rewriting, request filtering, and content manipulation.

Nginx Proxy Types

Nginx primarily operates in two proxy modes: reverse proxy and forward proxy.

Reverse Proxy

A reverse proxy retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client, appearing as if they originated from the proxy server itself. Clients are unaware of the backend architecture.

Forward Proxy

A forward proxy retrieves resources from a wide range of servers on behalf of a client. It acts as an intermediary for clients requesting resources from external servers. Clients are explicitly configured to use the forward proxy.

The two modes compare as follows:

  • Client Awareness: A reverse proxy is invisible to the client, whose requests simply target the proxy; a forward proxy must be explicitly configured on the client.
  • Purpose: A reverse proxy protects and optimizes backend servers (e.g., load balancing); a forward proxy gives clients access to external resources, often with filtering or logging.
  • Location: A reverse proxy is typically deployed in front of web servers; a forward proxy sits at the client's network edge.
  • Transparency: To the client, a reverse proxy appears to be the origin server, while a forward proxy appears as an intermediary.

Basic Reverse Proxy Configuration

Configuring Nginx as a reverse proxy involves defining a server block that listens for incoming requests and then uses the proxy_pass directive to forward them to an upstream server.

Prerequisites

  • An installed Nginx instance.
  • Access to Nginx configuration files (typically /etc/nginx/nginx.conf or files within /etc/nginx/sites-available/).
  • A backend server (e.g., an application server, another web server) running and accessible to the Nginx server.

Core Configuration Directives

  • proxy_pass: The fundamental directive for forwarding requests. Specifies the protocol, address, and optional port of the proxied server.
  • proxy_set_header: Modifies request headers that Nginx sends to the proxied server. Essential for passing client IP, host, and protocol information.
  • proxy_buffering: Controls whether Nginx buffers responses from the proxied server. Buffering can improve performance by allowing Nginx to receive the entire response before sending it to the client.
  • proxy_cache: Enables caching of responses from proxied servers.
A minimal configuration combining these directives:

# /etc/nginx/sites-available/my_reverse_proxy.conf

server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        # The target backend server
        proxy_pass http://backend_app_server:8080;

        # Pass original host and IP to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Buffering is on by default; disable it for streaming or SSE backends
        # proxy_buffering off;
    }

    # Optional: Serve static files directly from Nginx
    location /static/ {
        root /var/www/my_app;
        expires 30d;
    }
}

After creating this file, enable it by creating a symbolic link to sites-enabled:
sudo ln -s /etc/nginx/sites-available/my_reverse_proxy.conf /etc/nginx/sites-enabled/
Then, test the Nginx configuration and reload:
sudo nginx -t
sudo systemctl reload nginx

Advanced Reverse Proxy Configuration

Load Balancing

Nginx can distribute requests across multiple backend servers using various load balancing algorithms. The upstream block defines a group of servers.

# In nginx.conf or a separate file included by http block

upstream backend_servers {
    # Round-robin (default)
    server backend_server1.example.com:8080;
    server backend_server2.example.com:8080;
    server 192.168.1.100:8080; # Can use IP addresses

    # Weighted round-robin
    # server backend_server1.example.com:8080 weight=3;
    # server backend_server2.example.com:8080 weight=1;

    # Least connections
    # least_conn;

    # IP Hash (sticky sessions based on client IP)
    # ip_hash;

    # Passive health checks (available in open-source Nginx);
    # active health checks require NGINX Plus
    # server backend_server1.example.com:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://backend_servers; # Reference the upstream block
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
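As a further sketch, the upstream block can also keep idle connections to the backends open and designate a spare server that only receives traffic when the primaries fail (server names here are illustrative):

```nginx
upstream backend_servers {
    least_conn;                                # pick the backend with the fewest active connections
    server backend_server1.example.com:8080;
    server backend_server2.example.com:8080;
    server backup1.example.com:8080 backup;    # used only when the other servers are unavailable
    keepalive 32;                              # pool of idle keepalive connections per worker
}

server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://backend_servers;
        # Upstream keepalive requires HTTP/1.1 and a cleared Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```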

SSL/TLS Termination

Nginx can terminate SSL/TLS connections, offloading the encryption/decryption process from backend servers. This requires SSL certificates and keys.

server {
    listen 443 ssl;
    server_name secure.example.com;

    ssl_certificate /etc/nginx/ssl/secure.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/secure.example.com.key;

    # Recommended SSL settings for security and performance
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305';
    # Note: with OpenSSL, ssl_ciphers governs TLS 1.2 and below; TLS 1.3 suites are enabled by default
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1h;
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;

    location / {
        proxy_pass http://backend_app_server:8080; # Backend can be HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https; # Inform backend about original protocol
    }
}

# Optional: Redirect HTTP to HTTPS
server {
    listen 80;
    server_name secure.example.com;
    return 301 https://$host$request_uri;
}
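Once TLS terminates at Nginx, it is common (though optional) to enable HTTP/2 and advertise HSTS. A minimal sketch, reusing the certificate paths from the example above:

```nginx
server {
    listen 443 ssl;
    http2 on;    # requires Nginx 1.25.1+; older versions use "listen 443 ssl http2;"
    server_name secure.example.com;

    ssl_certificate     /etc/nginx/ssl/secure.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/secure.example.com.key;

    # Tell browsers to use HTTPS for the next six months; send this only once
    # you are certain the entire site works over TLS
    add_header Strict-Transport-Security "max-age=15768000" always;

    location / {
        proxy_pass http://backend_app_server:8080;
    }
}
```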

Caching

Nginx can cache responses from proxied servers, significantly reducing latency and backend load for static or infrequently changing content.

# In http block (outside any server block)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m inactive=60m max_size=1g;
proxy_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name cache.example.com;

    location / {
        proxy_pass http://backend_app_server:8080;
        proxy_set_header Host $host;

        proxy_cache my_cache; # Enable caching for this location
        proxy_cache_valid 200 302 10m; # Cache 200/302 responses for 10 minutes
        proxy_cache_valid 404 1m;    # Cache 404 responses for 1 minute
        proxy_cache_bypass $http_pragma $http_authorization; # Skip the cache lookup and go to the origin when these are set
        proxy_no_cache $http_pragma $http_authorization;     # Do not store the response in the cache when these are set

        add_header X-Proxy-Cache $upstream_cache_status; # Add header to see cache status
    }
}
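Two related directives are often added so a busy cache degrades gracefully. The following sketch serves stale content while the backend is failing and collapses concurrent cache misses into a single upstream request:

```nginx
location / {
    proxy_pass http://backend_app_server:8080;
    proxy_cache my_cache;
    proxy_cache_valid 200 10m;

    # Serve a stale copy if the backend errors out, times out, or is being refreshed
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;

    # Only one request populates a missing cache entry; the others wait for it
    proxy_cache_lock on;
    proxy_cache_lock_timeout 5s;
}
```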

WebSockets Proxying

Proxying WebSockets requires specific header manipulation to handle the Upgrade and Connection headers for protocol switching.

server {
    listen 80;
    server_name websocket.example.com;

    location /ws/ {
        proxy_pass http://backend_websocket_server:8081;

        # WebSocket-specific headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 86400s; # Adjust as needed for long-lived connections
    }
}
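For endpoints that serve both WebSocket and ordinary HTTP traffic, the pattern from the Nginx documentation uses a map block to set the Connection header conditionally, closing it when no Upgrade header is present:

```nginx
# In the http block
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name websocket.example.com;

    location /ws/ {
        proxy_pass http://backend_websocket_server:8081;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
```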

Basic Forward Proxy Configuration

Configuring Nginx as a forward proxy allows clients to route their outbound requests through Nginx, typically for access control or logging in corporate networks. Note that stock Nginx can only forward plain HTTP this way; tunneling HTTPS traffic (the CONNECT method) requires a third-party module such as ngx_http_proxy_connect_module.

Configuration Directives

  • resolver: Specifies DNS servers Nginx should use to resolve hostnames.
  • proxy_pass: Used within a location block, but with a variable for the target URL.
A minimal forward proxy configuration:

# In http block (outside any server block)
resolver 8.8.8.8 8.8.4.4 valid=300s; # Google Public DNS, adjust as needed

server {
    listen 3128; # Common port for proxy servers
    listen [::]:3128;

    # Restrict access to authorized clients (e.g., internal network)
    allow 192.168.1.0/24;
    deny all;

    location / {
        proxy_pass $scheme://$host$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Disable caching for a general-purpose forward proxy
        proxy_no_cache 1;
        proxy_cache_bypass 1;
    }
}

Clients would then configure their browsers or applications to use nginx_ip_address:3128 as a proxy.
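On the client side, most command-line tools honor the standard proxy environment variables; a minimal sketch (the address 192.0.2.10 is a placeholder for your Nginx host):

```shell
# Route plain-HTTP traffic through the Nginx forward proxy (placeholder address)
export http_proxy="http://192.0.2.10:3128"

# curl also accepts the proxy explicitly via -x:
#   curl -x "$http_proxy" -I http://example.org/
echo "$http_proxy"
```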

Monitoring and Troubleshooting

Effective operation of Nginx as a proxy requires monitoring and troubleshooting capabilities.

  • Configuration Testing: Always validate configuration files before reloading Nginx.
    sudo nginx -t
  • Service Status: Check Nginx service status.
    sudo systemctl status nginx
  • Access Logs: Nginx records every request it processes in the access log, typically located at /var/log/nginx/access.log. These logs provide details such as client IP, request method, URL, status code, and response size.
  • Error Logs: Critical errors, warnings, and debugging information are written to the error log, usually at /var/log/nginx/error.log. Monitor this file for issues with configuration, backend connectivity, or resource limitations.
  • Backend Health: Ensure backend servers are operational and accessible from the Nginx server. Use tools like curl or ping from the Nginx machine to test connectivity.
  • Network Connectivity: Verify network paths between clients, Nginx, and backend servers.
  • Resource Utilization: Monitor CPU, memory, and disk I/O on the Nginx server to identify bottlenecks.
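The access log is also useful in aggregate. A small sketch, assuming the default "combined" log format (where the status code is the 9th whitespace-separated field); the two sample lines stand in for /var/log/nginx/access.log:

```shell
# Summarize HTTP status codes from an Nginx access log in "combined" format
cat > /tmp/sample_access.log <<'EOF'
203.0.113.5 - - [01/Jan/2026:12:00:00 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/8.0"
203.0.113.6 - - [01/Jan/2026:12:00:01 +0000] "GET /missing HTTP/1.1" 404 153 "-" "curl/8.0"
EOF

# Count requests per status code ($9 is the status field)
awk '{ counts[$9]++ } END { for (s in counts) print s, counts[s] }' /tmp/sample_access.log | sort
# → 200 1
# → 404 1
```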