
Upstream Proxy

Explore upstream proxies and how they function. Learn step-by-step to configure cascading proxy servers for enhanced network control.

An upstream proxy is a proxy server that another proxy server forwards client requests to, forming a chain where the first proxy acts as a client to the upstream proxy.

What Is an Upstream Proxy?

When a client sends a request to a proxy server, that proxy server typically fetches the requested resource directly from the origin server on the internet. However, in many architectural designs, the first proxy server does not directly contact the origin. Instead, it forwards the request to another proxy server, known as an upstream proxy. This second proxy then handles the request, either by fulfilling it from its cache, forwarding it to yet another upstream proxy, or fetching it from the origin server.

The primary purposes of utilizing an upstream proxy include:

  • Layered Security: Adding an additional layer of security and access control.
  • Specialized Functions: Delegating specific tasks like content filtering, antivirus scanning, or advanced caching to a dedicated proxy.
  • Network Segmentation: Bridging different network segments or providing controlled access to external networks.
  • Anonymity and Privacy: Obscuring the originating client's IP address through multiple hops.
  • Bypassing Restrictions: Routing traffic through different geographical locations or networks to bypass geoblocks or network restrictions.
  • Load Balancing: Distributing requests among multiple upstream proxies or origin servers.

The flow of a request with an upstream proxy is: Client -> Proxy A -> Upstream Proxy B -> Origin Server. In this scenario, Proxy A is configured to use Proxy B as its upstream.
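What each hop does to the tracing headers can be sketched in a few lines of Python; the proxy names and IP addresses below are illustrative, not part of any real deployment:

```python
# Toy model of the chain Client -> Proxy A -> Upstream Proxy B -> Origin:
# each hop appends the sender's IP to X-Forwarded-For and its own
# identifier to Via, as Nginx and Squid do by default.

def forward(headers, proxy_name, sender_ip):
    """Simulate one proxy hop updating the tracing headers."""
    out = dict(headers)
    xff = out.get("X-Forwarded-For")
    out["X-Forwarded-For"] = f"{xff}, {sender_ip}" if xff else sender_ip
    via = out.get("Via")
    hop = f"1.1 {proxy_name}"
    out["Via"] = f"{via}, {hop}" if via else hop
    return out

headers = {}                                           # headers as sent by the client
headers = forward(headers, "proxy-a", "203.0.113.7")   # Proxy A records the client's IP
headers = forward(headers, "proxy-b", "192.168.1.10")  # Proxy B records Proxy A's IP

print(headers["X-Forwarded-For"])  # 203.0.113.7, 192.168.1.10
print(headers["Via"])              # 1.1 proxy-a, 1.1 proxy-b
```

The origin server can then read both headers to reconstruct the hop sequence.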

Cascading Proxies Explained

Cascading proxies, also known as proxy chaining, refer to the architecture where multiple proxy servers are arranged in a sequence, with each proxy forwarding requests to the next one in the chain until the final proxy contacts the origin server. This creates a multi-hop path for client requests.

Advantages of Cascading Proxies

  • Enhanced Security: Each proxy can enforce its own set of security policies, authentication, and access controls.
  • Modular Architecture: Different proxies can specialize in different functions (e.g., one for caching, another for WAF, another for egress control).
  • Complex Routing: Allows for intricate routing logic, directing specific types of traffic through different chains or geographical locations.
  • Increased Anonymity: More hops make it harder to trace the original client.
  • Bypass Redundancy: Provides alternative paths if one proxy in the chain fails or is blocked.

Disadvantages of Cascading Proxies

  • Increased Latency: Each additional proxy hop introduces processing delay and network latency.
  • Complexity: Configuration and management become more intricate, especially for debugging and troubleshooting.
  • Single Points of Failure: If not designed with redundancy, the failure of any proxy in the chain can disrupt service.
  • Debugging Challenges: Tracing the path of a request and identifying issues can be difficult across multiple servers.
  • Header Management: Proper handling of headers like X-Forwarded-For and Via is crucial for logging and origin server awareness.

Configuring an Upstream Proxy

The following configuration examples for common proxy servers demonstrate how to set up a proxy that forwards requests to an upstream.

Nginx (as a Reverse Proxy for an Upstream)

Nginx typically acts as a reverse proxy. To configure an Nginx instance to forward requests to an upstream proxy, define the upstream proxy's address in an upstream block and reference it from the proxy_pass directive inside a location block.

# Nginx Configuration (e.g., proxy.example.com)
# This Nginx instance acts as Proxy A, forwarding to Upstream Proxy B.

http {
    upstream upstream_proxy_b {
        # Address of your upstream proxy (Proxy B)
        server 192.168.1.100:3128; # Example: IP and port of Proxy B
        # Optional: Add multiple servers for load balancing/failover if Proxy B has multiple instances
        # server 192.168.1.101:3128;
    }

    server {
        listen 80;
        server_name proxy.example.com;

        location / {
            # Pass requests to the defined upstream_proxy_b
            proxy_pass http://upstream_proxy_b;

            # Recommended headers for proxy chains
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Via "1.1 $hostname"; # Identify this proxy in the Via header (note: this sets, rather than appends to, any existing Via value)
        }
    }
}

Squid (as a Forward Proxy for an Upstream)

Squid is commonly used as a forward proxy. To configure Squid to use an upstream proxy, you use the cache_peer directive.

# Squid Configuration (e.g., proxy-a.conf)
# This Squid instance acts as Proxy A, forwarding to Upstream Proxy B.

# Define the upstream proxy (Proxy B)
# Syntax: cache_peer hostname type http_port icp_port [options]
# type: parent or sibling (parent is more common for upstream)
# http_port: HTTP port of the upstream proxy
# icp_port: ICP port (usually 0 if not used or unknown)
cache_peer 192.168.1.100 parent 3128 0 no-query default

# Example for multiple upstream proxies (Proxy B and Proxy C)
# cache_peer 192.168.1.101 parent 3128 0 no-query round-robin
# cache_peer 192.168.1.102 parent 3128 0 no-query round-robin

# Access control to allow clients to use this proxy
acl localnet src 192.168.0.0/24 # Example: your client network
http_access allow localnet
http_access deny all

# Specify the port Squid listens on for client requests
http_port 3129 # Proxy A listens on 3129

# Optional: Add Via header
# Via header handling: Squid appends its identifier to the Via header by
# default ('via on'). Keep the default in a cascade so each hop remains
# traceable; set 'via off' only to deliberately hide this hop.
via on

In this Squid configuration, Proxy A listens on port 3129 and forwards requests to 192.168.1.100:3128 (Proxy B), which the default option marks as the parent of last resort. To guarantee Squid never bypasses the parent and fetches directly from the origin, also set never_direct allow all.

Configuring Cascading Proxies

To set up cascading proxies, you configure each proxy in the chain to point to the next one.

Scenario: Client -> Proxy A -> Proxy B -> Internet

This scenario involves two proxies in a chain.

Proxy A Configuration (Front-end Proxy)

Proxy A is the first point of contact for clients and forwards requests to Proxy B.

Nginx as Proxy A:
(Refer to the Nginx example under "Configuring an Upstream Proxy" above. The upstream_proxy_b block and proxy_pass http://upstream_proxy_b; directive configure Nginx to send requests to Proxy B.)

Squid as Proxy A:
(Refer to the Squid example under "Configuring an Upstream Proxy" above. The cache_peer 192.168.1.100 parent 3128 0 no-query default directive configures Squid to send requests to Proxy B.)

Proxy B Configuration (Intermediate/Upstream Proxy)

Proxy B receives requests from Proxy A and then forwards them to the internet (or to another upstream proxy if the chain is longer).

Nginx as Proxy B:
If Proxy B is also an Nginx instance, it would typically be configured as a standard reverse proxy that forwards to the origin server. If it's acting solely as an egress point or a further proxy, its configuration would be similar to Proxy A's, but its proxy_pass target would be the next proxy or the actual origin.

# Nginx Configuration for Proxy B (192.168.1.100)
# This Nginx instance receives requests from Proxy A and forwards them onward.

http {
    server {
        listen 3128; # Proxy B listens on 3128 for requests from Proxy A
        server_name proxy-b.example.com; # Or simply listen on IP

        location / {
            # Nginx is not well suited to acting as a general forward proxy for
            # arbitrary destinations; that usually requires custom modules or Lua
            # scripting, and Squid is the more common choice for general internet
            # egress. As Proxy B, Nginx therefore typically forwards to a *known*
            # next hop: either a specific origin or a fixed egress proxy.

            # Option 1: forward to a specific known origin
            # proxy_pass http://www.example.com;

            # Option 2: forward to a fixed egress proxy (example address)
            proxy_pass http://final_egress_proxy:8080;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Via "1.1 $hostname";
        }
    }
}

Squid as Proxy B:
Proxy B receives requests from Proxy A and is configured to either fetch directly from the internet or forward to a further upstream. If it's the last proxy before the internet, it doesn't need a cache_peer directive pointing to another proxy (unless it's for specific domains or failover).

# Squid Configuration for Proxy B (192.168.1.100)
# This Squid instance receives requests from Proxy A and fetches from the internet.

# Proxy B listens on 3128 for requests from Proxy A
http_port 3128

# Optional: If Proxy B also has its own upstream (e.g., an ISP proxy or another specific proxy)
# cache_peer upstream.isp.com parent 8080 0 no-query default_parent

# Access control to allow Proxy A (and other authorized clients) to connect
acl proxy_a_network src 192.168.1.0/24 # Example: network where Proxy A resides
http_access allow proxy_a_network
http_access deny all

# Configure caching, logging, etc. as needed for Proxy B

Advanced Cascading: Conditional Upstreams

For more complex routing, you might want to send requests for specific domains or paths to different upstream proxies.

Nginx Conditional Upstreams:

http {
    # Upstream group for general traffic
    upstream general_upstream {
        server 192.168.1.100:3128; # Proxy B
    }

    # Upstream group for specific sensitive traffic
    upstream secure_upstream {
        server 192.168.2.200:4444; # Secure Proxy C
    }

    server {
        listen 80;
        server_name proxy.example.com;

        location /secure/ {
            # Requests to /secure/ go to Secure Proxy C
            proxy_pass http://secure_upstream;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location / {
            # All other requests go to Proxy B
            proxy_pass http://general_upstream;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}

Squid Conditional Upstreams:

# Define multiple cache_peers
cache_peer 192.168.1.100 parent 3128 0 no-query name=general_proxy
cache_peer 192.168.2.200 parent 4444 0 no-query name=secure_proxy

# ACLs to identify specific traffic
acl secure_domains dstdomain .secure.example.com
# Note: url_regex rarely matches HTTPS requests, since Squid only sees the
# CONNECT host:port for encrypted traffic; prefer dstdomain for HTTPS.
acl secure_urls url_regex ^https://secure\.

# Direct requests based on ACLs
cache_peer_access secure_proxy allow secure_domains
cache_peer_access secure_proxy allow secure_urls
cache_peer_access general_proxy allow all

# Traffic that matches no cache_peer_access rule goes direct or fails,
# depending on 'never_direct'/'always_direct' and whether a peer carries
# the 'default' option. Keep either a 'default' parent or a final
# catch-all cache_peer_access rule, as above.
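The routing decision both configurations above express, secure destinations to Proxy C and everything else to Proxy B, can be sketched as a small Python function; the addresses mirror the examples and are purely illustrative:

```python
# Hypothetical sketch of conditional upstream selection, mirroring the
# dstdomain-style matching (".secure.example.com") in the Squid ACLs above.

SECURE_DOMAIN = "secure.example.com"

def pick_upstream(dest_host):
    """Return the upstream proxy address for a destination host."""
    if dest_host == SECURE_DOMAIN or dest_host.endswith("." + SECURE_DOMAIN):
        return "192.168.2.200:4444"  # Secure Proxy C
    return "192.168.1.100:3128"      # Proxy B (general traffic)

print(pick_upstream("login.secure.example.com"))  # 192.168.2.200:4444
print(pick_upstream("www.example.org"))           # 192.168.1.100:3128
```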

Key Considerations for Cascading Proxies

Latency and Performance

Each proxy in the chain adds processing time and network latency. Minimizing the number of hops and ensuring high-performance proxies and network links are crucial. Monitor end-to-end latency to identify bottlenecks.
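As a back-of-the-envelope illustration (all numbers invented), the end-to-end overhead is roughly the sum of per-hop network round-trip and processing times:

```python
# Rough latency model for a proxy chain: each hop contributes network RTT
# plus the proxy's own processing time. Values are illustrative.

def chain_latency_ms(hops):
    """hops: list of (network_rtt_ms, processing_ms) pairs, client to origin."""
    return sum(rtt + proc for rtt, proc in hops)

direct  = chain_latency_ms([(40.0, 0.0)])                           # client -> origin
one_hop = chain_latency_ms([(5.0, 2.0), (40.0, 0.0)])               # client -> A -> origin
cascade = chain_latency_ms([(5.0, 2.0), (10.0, 2.0), (40.0, 0.0)])  # client -> A -> B -> origin

print(direct, one_hop, cascade)  # 40.0 47.0 59.0
```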

Security and Authentication

  • Inter-proxy Authentication: Implement authentication (e.g., IP whitelisting, shared secrets, client certificates) between proxies to prevent unauthorized access to intermediate proxies.
  • SSL/TLS Termination: Decide where SSL/TLS termination occurs. If proxies decrypt traffic, ensure proper certificate management and re-encryption for subsequent hops.
  • Access Control: Configure strict access control lists (ACLs) on each proxy to allow traffic only from authorized upstream/downstream proxies or clients.

Debugging and Troubleshooting

Tracing requests through multiple proxies can be complex.
  • X-Forwarded-For Header: Tracks the original client IP address and each subsequent proxy IP in the chain. Ensure it is correctly appended at every hop ($proxy_add_x_forwarded_for in Nginx).
  • Via Header: Indicates the intermediate proxies the request has passed through. Each proxy should add its identifier to this header.
  • Logging: Centralized logging and correlation IDs are essential for tracing requests across the entire proxy chain.
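Assuming every proxy in the chain sets both headers, the hop sequence can be reconstructed at the origin; a minimal sketch with illustrative values:

```python
# Pair each hop's Via identifier with the IP it appended to
# X-Forwarded-For to reconstruct the request's path.

def parse_chain(xff, via):
    """Return (via_entry, forwarded_ip) pairs; assumes each proxy set both headers."""
    ips = [p.strip() for p in xff.split(",")]
    hops = [p.strip() for p in via.split(",")]
    return list(zip(hops, ips))

chain = parse_chain("203.0.113.7, 192.168.1.10",
                    "1.1 proxy-a, 1.1 proxy-b")
for hop, ip in chain:
    print(f"{hop} forwarded the request from {ip}")
```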

Load Balancing and Failover

To ensure high availability and distribute traffic, configure load balancing for upstream proxies.
  • Nginx: Use the upstream block with multiple server directives and options such as least_conn, backup, and down (round-robin is the default method).

upstream proxy_b_cluster {
    least_conn;                         # Use the least-connections method
    server 192.168.1.100:3128 weight=5; # Primary Proxy B
    server 192.168.1.101:3128 backup;   # Backup Proxy B
    server 192.168.1.102:3128;          # Another primary Proxy B
}
# Then: proxy_pass http://proxy_b_cluster;

  • Squid: Use multiple cache_peer directives with options such as round-robin, weighted-round-robin, and no-query.

cache_peer 192.168.1.100 parent 3128 0 no-query round-robin weight=10
cache_peer 192.168.1.101 parent 3128 0 no-query round-robin weight=5
cache_peer 192.168.1.102 parent 3128 0 no-query round-robin
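The weight options above enable weighted round-robin; a toy Python sketch of that selection strategy (addresses are illustrative, and a real balancer would also track peer health):

```python
# Weighted round-robin over upstream peers: each peer is picked in
# proportion to its weight.
import itertools

def weighted_round_robin(peers):
    """peers: list of (address, weight); yields addresses cyclically by weight."""
    expanded = [addr for addr, weight in peers for _ in range(weight)]
    return itertools.cycle(expanded)

rr = weighted_round_robin([("192.168.1.100:3128", 2),
                           ("192.168.1.101:3128", 1)])
picks = [next(rr) for _ in range(6)]
print(picks)  # the .100 peer is chosen twice as often as the .101 peer
```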

Header Management

Careful management of HTTP headers is critical. Beyond X-Forwarded-For and Via, consider Proxy-Authorization for authenticating with upstream proxies and ensuring other relevant headers are forwarded or stripped as per policy.
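For an upstream that requires Basic credentials, the Proxy-Authorization value is simply the base64 encoding of user:password; a minimal sketch with made-up credentials:

```python
# Build the Proxy-Authorization header a downstream proxy would send to a
# Basic-authenticated upstream. Credentials are illustrative.
import base64

def proxy_auth_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(proxy_auth_header("user", "pass"))  # Basic dXNlcjpwYXNz
```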

Comparison: Single Upstream vs. Cascading Upstreams

Feature / Aspect | Single Upstream Proxy | Cascading Upstream Proxies
Complexity       | Low; simpler configuration and management. | High; intricate configuration, management, and debugging across multiple servers.
Latency          | Minimal overhead; one additional network hop. | Increased overhead; multiple network hops and processing delays per proxy.
Flexibility      | Limited to the capabilities of a single proxy. | High; allows for specialized functions at each stage and complex routing.
Security Layers  | Single layer of security and access control. | Multi-layered security; each proxy can enforce distinct policies.
Anonymity        | Provides basic anonymity from the origin. | Enhanced anonymity; harder to trace the original client through multiple hops.
Debugging        | Relatively straightforward. | Challenging; requires careful header management (X-Forwarded-For, Via) and logging.
Failure Impact   | Failure of the single upstream stops all traffic. | Failure of any proxy in the chain can disrupt service if no failover is configured.
Use Cases        | Basic content filtering, caching, simple access control. | Advanced security, compliance, geo-routing, specialized services, high-anonymity needs.
Auto-update: 03.03.2026