In Kubernetes, proxies for Pods and Services are configured primarily through environment variables for egress traffic, sidecar containers for granular control and service mesh integration, and Ingress resources or Service types for managing ingress traffic. This article details the mechanisms for implementing proxy configurations within a Kubernetes environment.
## Egress Proxy Configuration for Pods
Pods often require an egress proxy to access external networks, enforce security policies, or manage traffic routing outside the cluster.
### Environment Variables
The most straightforward method for configuring an egress proxy for applications within a Pod is through standard HTTP/HTTPS proxy environment variables. Many applications and client libraries automatically respect these variables.
- `HTTP_PROXY`: Specifies a proxy server for HTTP requests.
- `HTTPS_PROXY`: Specifies a proxy server for HTTPS requests.
- `NO_PROXY`: A comma-separated list of hostnames, domains, or IP addresses that should bypass the proxy. This is crucial for internal cluster communication (e.g., `kubernetes.default.svc`, `10.0.0.0/8`, `*.svc.cluster.local`).
These variables are defined in the Pod's container specification.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-proxy
spec:
  containers:
  - name: my-app
    image: my-application-image:latest
    env:
    - name: HTTP_PROXY
      value: "http://proxy.internal.domain:3128"
    - name: HTTPS_PROXY
      value: "http://proxy.internal.domain:3128"
    - name: NO_PROXY
      value: "localhost,127.0.0.1,.svc,.cluster.local,10.0.0.0/8" # Example: adjust for your cluster CIDR
```
### Init Containers for Dynamic Proxy Configuration
For more complex scenarios, an Init Container can prepare proxy configurations or inject certificates before the main application container starts. This is useful for:
- Fetching proxy details from a configuration service.
- Generating `NO_PROXY` lists based on cluster-internal IP ranges.
- Injecting custom CA certificates required by the proxy.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-init-proxy
spec:
  initContainers:
  - name: init-proxy-config
    image: alpine/git # Example: a simple image to fetch config
    # 'export' ensures the variable is passed on to the exec'd binary after sourcing
    command: ["sh", "-c", "echo 'export HTTP_PROXY=http://dynamic-proxy:3128' > /mnt/proxy/config"]
    volumeMounts:
    - name: proxy-config-volume
      mountPath: /mnt/proxy
  containers:
  - name: my-app
    image: my-application-image:latest
    command: ["sh", "-c", ". /mnt/proxy/config && exec my-app-binary"] # Source config, then start the app
    volumeMounts:
    - name: proxy-config-volume
      mountPath: /mnt/proxy
  volumes:
  - name: proxy-config-volume
    emptyDir: {}
```
### Sidecar Proxies for Egress
A sidecar proxy runs in the same Pod as the application container. It intercepts all outbound traffic from the application container, routing it through the proxy. This pattern provides:
- Network Transparency: The application container does not need to be proxy-aware; the sidecar handles the proxy logic.
- Centralized Policy: Egress policies (e.g., allow/deny lists, rate limiting, authentication) can be enforced at the sidecar level, independent of the application.
- Observability: The sidecar can emit metrics, logs, and traces for all egress traffic.
Common sidecar proxies include Envoy (used by Istio) or Linkerd's proxy.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sidecar-egress
spec:
  containers:
  - name: my-app
    image: my-application-image:latest
    # Application traffic is redirected to the sidecar via iptables rules
  - name: egress-proxy-sidecar
    image: envoyproxy/envoy:v1.28.0 # Example Envoy image
    ports:
    - containerPort: 15001 # Port for application traffic to be sent to
    volumeMounts:
    - name: envoy-config
      mountPath: /etc/envoy
    # Envoy's forwarding rules (to an external proxy or directly out)
    # are typically managed by a service mesh control plane
  volumes:
  - name: envoy-config
    configMap:
      name: egress-envoy-config
```
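The Pod above references an `egress-envoy-config` ConfigMap. As a minimal sketch (the listener port, cluster name, and upstream address `proxy.internal.domain:3128` are illustrative assumptions, not values the cluster provides), an Envoy configuration that forwards all intercepted TCP traffic to an upstream proxy might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: egress-envoy-config
data:
  envoy.yaml: |                     # the default config path in the official Envoy image
    static_resources:
      listeners:
      - name: egress_listener
        address:
          socket_address: { address: 0.0.0.0, port_value: 15001 }
        filter_chains:
        - filters:
          - name: envoy.filters.network.tcp_proxy
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
              stat_prefix: egress_tcp
              cluster: external_proxy
      clusters:
      - name: external_proxy        # upstream corporate/egress proxy (assumed)
        type: STRICT_DNS
        connect_timeout: 5s
        load_assignment:
          cluster_name: external_proxy
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address: { address: proxy.internal.domain, port_value: 3128 }
```

In a real service mesh deployment this static bootstrap would be replaced by dynamic configuration pushed from the control plane.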
## Ingress Proxy Configuration for Services
For incoming traffic to Services, Kubernetes offers several mechanisms that inherently involve proxying.
### Kubernetes Services (kube-proxy)
The kube-proxy component running on each node implements the virtual IP (VIP) for Kubernetes Services. When a client inside the cluster connects to a Service's ClusterIP, the rules programmed by kube-proxy redirect the traffic to one of the Pods backing that Service, acting as a Layer 4 (TCP/UDP) proxy and load balancer. (A ClusterIP itself is not routable from outside the cluster; external clients reach Services via NodePort, LoadBalancer, or Ingress.)
- Mode: `kube-proxy` operates in `iptables` (default) or `ipvs` mode.
  - `iptables`: Uses `iptables` rules to perform NAT and load balancing.
  - `ipvs`: Uses IP Virtual Server (IPVS) for more sophisticated load-balancing algorithms and better performance with large numbers of Services.
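The mode is selected through kube-proxy's component configuration, typically stored in a ConfigMap by installers such as kubeadm. A minimal sketch (the `rr` round-robin scheduler is one assumption; IPVS supports several algorithms):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"      # empty or "iptables" selects the default iptables mode
ipvs:
  scheduler: "rr" # round-robin; other IPVS schedulers are also available
```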
This proxying is automatic and requires no explicit configuration within Pods or Services beyond defining the Service itself.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP # Internal proxying by kube-proxy
```
### Ingress Resources and Ingress Controllers
For external HTTP/HTTPS access to Services, an Ingress resource is used in conjunction with an Ingress Controller. The Ingress Controller itself is a specialized proxy that runs within the cluster.
- Ingress Controller: A Pod (or set of Pods) running a proxy like NGINX, HAProxy, Traefik, or an AWS ALB/GCE L7 Load Balancer controller. It watches Ingress resources and configures its proxy rules dynamically.
- Ingress Resource: Defines rules for routing external HTTP/HTTPS traffic to Services. This includes host-based routing, path-based routing, and TLS termination.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
  tls: # Optional: TLS termination at the Ingress Controller
  - hosts:
    - api.example.com
    secretName: my-tls-secret
```
### LoadBalancer Services
When a Service of type `LoadBalancer` is created, the cloud provider provisions an external load balancer for it. This external load balancer acts as a proxy, forwarding traffic to the Service's NodePorts, from which kube-proxy routes it to the backing Pods.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer # External proxying by cloud provider LB
```
### Sidecar Proxies for Ingress (Service Mesh)
In a service mesh (e.g., Istio, Linkerd), sidecar proxies are injected into Pods to handle both ingress and egress traffic for the application container. For ingress, the sidecar intercepts incoming traffic to the application and applies mesh policies (e.g., authentication, authorization, traffic shifting, retries, circuit breaking) before forwarding it to the application container.
This provides:
- Policy Enforcement: Fine-grained control over how services communicate.
- Observability: Automatic metrics, logging, and distributed tracing for all service-to-service communication.
- Traffic Management: Advanced routing capabilities (e.g., canary deployments, A/B testing).
- Security: Mutual TLS (mTLS) between services.
The sidecar injection and configuration are typically automated by the service mesh's control plane.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-mesh-sidecar
  labels:
    app: my-app
    # For Istio, injection is usually enabled namespace-wide via the
    # 'istio-injection: enabled' label on the Namespace; this Pod-level
    # label opts a single Pod in explicitly.
    sidecar.istio.io/inject: "true"
spec:
  containers:
  - name: my-app
    image: my-application-image:latest
    ports:
    - containerPort: 8080
  # The Istio control plane automatically adds an 'istio-proxy' sidecar container
  # and configures iptables rules to redirect traffic.
```
## Comparison of Proxy Approaches

| Feature | Environment Variables | Sidecar Proxy (Manual) | Sidecar Proxy (Service Mesh) | Ingress Controller | LoadBalancer Service |
|---|---|---|---|---|---|
| Scope | Egress traffic from a specific container | Egress/ingress for a specific Pod | Egress/ingress for all mesh-enabled Pods | External ingress to Services | External ingress to Services |
| Layer | L7 (HTTP/S) | L4/L7 | L4/L7 | L7 (HTTP/S) | L4 (TCP/UDP) |
| Configuration | Pod `env` | Pod containers, volumes, iptables (manual) | Automated by control plane | Ingress resource, controller config | Service type, cloud provider |
| Complexity | Low | High (manual iptables, config) | Medium (initial mesh setup) | Medium (Ingress rules, controller) | Low (Service definition) |
| Capabilities | Basic egress proxy | Granular traffic control, observability, security | Advanced traffic management, mTLS, observability, security | Path/host routing, TLS termination, L7 features | Basic L4 load balancing, external IP |
| Primary Use Case | Simple external proxy requirements | Custom, fine-grained proxy per app | Microservices communication, policy, observability | External HTTP/S access, L7 routing | External L4 access, cloud integration |
| Application Impact | App must respect env vars | App is unaware (transparent) | App is unaware (transparent) | App is unaware (Service target) | App is unaware (Service target) |
## Practical Considerations
### Certificate Management
Proxies often terminate or originate TLS connections.
- Ingress Controllers: Typically manage TLS certificates via Kubernetes Secrets, allowing for TLS termination at the edge of the cluster.
- Sidecar Proxies: In a service mesh, sidecars handle mTLS automatically, often using short-lived certificates issued by the mesh's certificate authority. For egress, they might need to trust custom CAs to inspect encrypted traffic or connect to internal services.
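For Ingress TLS termination, the certificate and key referenced by an Ingress's `secretName` are stored in a Secret of type `kubernetes.io/tls`. A sketch matching the `my-tls-secret` name used in the earlier Ingress example (the base64 values below are placeholders, not real key material):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi4uLg== # base64-encoded PEM certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi4uLg== # base64-encoded PEM private key (placeholder)
```

In practice, tools such as cert-manager can create and rotate these Secrets automatically.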
### Performance and Overhead
Each proxy introduces an additional hop in the network path, which can add latency.
- `kube-proxy`: Highly optimized; `ipvs` mode offers better performance for large Service counts.
- Sidecars: Introduce CPU, memory, and network overhead per Pod. This overhead is a design consideration for service mesh deployments.
- Ingress Controllers: Can be a bottleneck if not scaled appropriately for high traffic volumes.
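One common mitigation for controller bottlenecks is autoscaling the controller Deployment. A sketch using a HorizontalPodAutoscaler (the `ingress-nginx` namespace, Deployment name, and 70% CPU target are assumptions modeled on a typical NGINX Ingress Controller install):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ingress-controller-hpa
  namespace: ingress-nginx          # assumed controller namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ingress-nginx-controller  # assumed controller Deployment name
  minReplicas: 2                    # keep redundancy even at low load
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```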
### Observability
Proxies provide a centralized point for collecting telemetry.
- Logs: Proxies can log connection details, request/response headers, and errors.
- Metrics: Standard metrics such as request rates, latencies, and error rates can be scraped.
- Tracing: Sidecar proxies in a service mesh enable distributed tracing without application code changes.
### Security
Proxies are critical enforcement points for security policies.
- Access Control: Proxies can enforce authentication and authorization policies for incoming and outgoing traffic.
- Network Segmentation: Proxies can isolate application traffic, preventing direct connections between services that should not communicate.
- DDoS Protection: External proxies (like Ingress Controllers or cloud LBs) can offer DDoS mitigation capabilities.
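Network segmentation can be enforced at the cluster network layer so that application Pods can only egress via the proxy. A sketch using a NetworkPolicy (the `app: egress-proxy` label is a hypothetical selector for the proxy Pods in the same namespace; the DNS rule keeps name resolution working):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-via-proxy-only
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Egress
  egress:
  # Allow traffic only to the proxy Pods (label is an assumption)
  - to:
    - podSelector:
        matchLabels:
          app: egress-proxy
  # DNS must usually remain reachable for the application to resolve names
  - ports:
    - protocol: UDP
      port: 53
```

Note that NetworkPolicy enforcement requires a CNI plugin that supports it.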