
Scraping with a Proxy Using Beautiful Soup and requests

This guide details how to integrate proxies with Beautiful Soup and requests for robust web scraping, helping you avoid IP bans and extract data reliably.


Scraping with Beautiful Soup and requests through a proxy means configuring the requests library to route traffic through a specified proxy server before parsing the returned HTML with Beautiful Soup. This enables IP rotation, bypasses geo-restrictions, and mitigates IP blocks.

When performing web scraping, direct requests from a single IP address can lead to rate limiting, temporary or permanent IP bans, or geo-restricted content. Proxy services address these challenges by routing requests through different IP addresses, making scraping more robust and scalable.

The core libraries for this task are:
* requests: An HTTP library for Python, simplifying sending HTTP requests.
* Beautiful Soup: A Python library for parsing HTML and XML documents.
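
Both libraries install from PyPI:

pip install requests beautifulsoup4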

Configuring Proxies with requests

The requests library supports proxy configuration via the proxies parameter in its request methods.

Single Proxy Configuration

To use a single proxy, provide a dictionary mapping protocols (HTTP, HTTPS) to the proxy URL.

import requests

proxy_http = "http://your_proxy_ip:port"
proxy_https = "http://your_proxy_ip:port" # Usually the same http:// URL; the dict key, not the URL scheme, routes HTTPS traffic

proxies = {
    "http": proxy_http,
    "https": proxy_https,
}

try:
    response = requests.get("http://httpbin.org/ip", proxies=proxies, timeout=10)
    response.raise_for_status()
    print("External IP:", response.json().get('origin'))
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")

Proxy Authentication

For proxies requiring authentication, embed credentials directly into the proxy URL.

import requests

proxy_user = "your_username"
proxy_pass = "your_password"
proxy_host = "your_proxy_ip"
proxy_port = "port"

authenticated_proxy = f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}"

proxies = {
    "http": authenticated_proxy,
    "https": authenticated_proxy,
}

try:
    response = requests.get("http://httpbin.org/ip", proxies=proxies, timeout=10)
    response.raise_for_status()
    print("External IP (authenticated):", response.json().get('origin'))
except requests.exceptions.RequestException as e:
    print(f"Authenticated request failed: {e}")

Rotating Proxies

For large-scale scraping, rotating through a list of proxies is essential to distribute requests and minimize the risk of being blocked.

import requests
import random
import time

proxy_list = [
    "http://user1:pass1@proxy1.example.com:8000",
    "http://user2:pass2@proxy2.example.com:8000",
    "http://user3:pass3@proxy3.example.com:8000",
]

def get_random_proxy():
    return random.choice(proxy_list)

def make_proxied_request(url, headers=None, attempt_limit=3):
    for attempt in range(attempt_limit):
        proxy_url = get_random_proxy()
        proxies = {"http": proxy_url, "https": proxy_url}
        try:
            print(f"Attempt {attempt + 1}: Using proxy {proxy_url.split('@')[-1]} for {url}")
            response = requests.get(url, proxies=proxies, headers=headers, timeout=15)
            response.raise_for_status()
            return response
        except requests.exceptions.RequestException as e:
            print(f"Proxy request failed (attempt {attempt + 1}): {e}")
            time.sleep(random.uniform(2, 5)) # Delay before retrying with a new proxy
    return None

# Example usage against a public practice site
target_url = "http://quotes.toscrape.com/"
response = make_proxied_request(target_url)
if response:
    print(f"Successfully fetched {target_url} with status code {response.status_code}")
else:
    print(f"Failed to fetch {target_url} after multiple retries")

Parsing HTML with Beautiful Soup

Beautiful Soup transforms HTML content from a requests response into a navigable Python object.

from bs4 import BeautifulSoup

html_doc = """
<html>
<head><title>Example Page</title></head>
<body>
<p class="intro"><b>Welcome!</b></p>
<div id="content">
    <a href="/item1" class="product-link">Product A</a>
    <a href="/item2" class="product-link">Product B</a>
</div>
</body>
</html>
"""

soup = BeautifulSoup(html_doc, 'html.parser')

# Accessing elements
print("Page Title:", soup.title.string)

# Finding specific elements
intro_paragraph = soup.find('p', class_='intro')
print("Intro text:", intro_paragraph.get_text(strip=True))

# Finding all elements by class
product_links = soup.find_all('a', class_='product-link')
for link in product_links:
    print(f"Product: {link.get_text()}, URL: {link['href']}")

Integrating Proxies with Beautiful Soup Scraping

Combining proxy configuration with Beautiful Soup parsing enables a complete scraping workflow.

import requests
from bs4 import BeautifulSoup
import random
import time

# --- Proxy Configuration (as defined previously) ---
proxy_list = [
    "http://user1:pass1@proxy1.example.com:8000",
    "http://user2:pass2@proxy2.example.com:8000",
    # Add more proxies from your service
]

def get_random_proxy():
    return random.choice(proxy_list)

def make_proxied_request(url, headers=None, attempt_limit=3):
    for attempt in range(attempt_limit):
        proxy_url = get_random_proxy()
        proxies = {"http": proxy_url, "https": proxy_url}
        try:
            print(f"Attempt {attempt + 1} for {url}: Using proxy {proxy_url.split('@')[-1]}")
            response = requests.get(url, proxies=proxies, headers=headers, timeout=15)
            response.raise_for_status()
            return response
        except requests.exceptions.RequestException as e:
            print(f"Request failed (attempt {attempt + 1}): {e}")
            time.sleep(random.uniform(2, 5))
    return None

# --- Scraping Logic ---
def scrape_quotes(url):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
    }

    response = make_proxied_request(url, headers=headers)

    if response:
        soup = BeautifulSoup(response.text, 'html.parser')
        quotes = soup.find_all('div', class_='quote')

        data = []
        for quote in quotes:
            text = quote.find('span', class_='text').get_text(strip=True)
            author = quote.find('small', class_='author').get_text(strip=True)
            tags = [tag.get_text(strip=True) for tag in quote.find_all('a', class_='tag')]
            data.append({"text": text, "author": author, "tags": tags})
        return data
    return []

# --- Execution ---
if __name__ == "__main__":
    target_url = "http://quotes.toscrape.com/"
    print(f"Starting scrape of {target_url}")
    scraped_data = scrape_quotes(target_url)

    if scraped_data:
        print(f"Scraped {len(scraped_data)} quotes:")
        for i, quote in enumerate(scraped_data[:3]): # Print first 3 for brevity
            print(f"  {i+1}. Author: {quote['author']}, Quote: {quote['text'][:50]}...")
    else:
        print("No data scraped.")

Best Practices and Considerations

User-Agent Headers

Web servers often inspect the User-Agent header to identify the client. The requests default (python-requests/x.y.z) immediately marks traffic as automated; mimicking a common browser User-Agent reduces detection.

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Connection": "keep-alive",
}
response = requests.get(url, proxies=proxies, headers=headers, timeout=10)

Request Delays and Throttling

Rapid requests can trigger rate limits or IP bans. Implement delays between requests, especially when rotating proxies. time.sleep() with a random delay range is effective.

import time
import random
# ... inside a loop processing multiple URLs ...
time.sleep(random.uniform(2, 7)) # Wait between 2 and 7 seconds
response = make_proxied_request(next_url, headers=headers)

Error Handling

Robust scraping requires comprehensive error handling for network issues, proxy failures, and unexpected server responses; a sketch combining these checks follows the list.
* requests.exceptions.RequestException: Catches all requests-related errors (connection, timeout, HTTP errors).
* HTTP Status Codes: Check response.status_code. Codes like 403 (Forbidden), 404 (Not Found), 429 (Too Many Requests), or 5xx (Server Error) indicate issues.
* Timeouts: Configure a timeout parameter in requests.get() to prevent indefinite waits.
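
A minimal sketch combining these checks, built around a hypothetical fetch_with_checks helper (the status-code policy shown is one reasonable choice, not the only one):

import requests

def fetch_with_checks(url, proxies=None, headers=None):
    try:
        response = requests.get(url, proxies=proxies, headers=headers, timeout=10)
    except requests.exceptions.Timeout:
        print("Request timed out")
        return None
    except requests.exceptions.RequestException as e:
        print(f"Request failed: {e}")  # connection errors, proxy errors, etc.
        return None

    if response.ok:  # any status code below 400
        return response
    if response.status_code == 429:
        print("Rate limited (429): slow down or rotate the proxy")
    elif response.status_code == 403:
        print("Forbidden (403): the client or proxy may be blocked")
    else:
        print(f"HTTP error {response.status_code}")
    return None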

CAPTCHAs and Advanced Anti-Scraping Measures

Some websites employ advanced detection mechanisms like CAPTCHAs or JavaScript challenges. Proxies help with IP rotation but do not solve these directly. For such cases, consider headless browsers (e.g., Selenium, Playwright) or specialized CAPTCHA-solving services.
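
As an illustration, a minimal sketch of routing a headless browser through an authenticated proxy with Playwright's sync API (assumes pip install playwright and playwright install chromium; the proxy address and credentials are placeholders):

from playwright.sync_api import sync_playwright
from bs4 import BeautifulSoup

with sync_playwright() as p:
    # Playwright takes proxy credentials separately, not embedded in the URL
    browser = p.chromium.launch(headless=True, proxy={
        "server": "http://proxy1.example.com:8000",
        "username": "user1",
        "password": "pass1",
    })
    page = browser.new_page()
    page.goto("http://quotes.toscrape.com/", timeout=30000)  # timeout in ms
    html = page.content()  # HTML after any JavaScript has executed
    browser.close()

soup = BeautifulSoup(html, 'html.parser')
print("Title:", soup.title.string)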

Proxy Types Comparison

| Feature | Datacenter Proxies | Residential Proxies |
| --- | --- | --- |
| IP Source | Commercial servers, cloud providers | Real user devices (desktops, mobile) with ISP IPs |
| Anonymity | High, but IPs are often recognized as datacenter | Very high, IPs appear as legitimate users |
| Cost | Generally lower | Significantly higher |
| Speed | Typically faster, lower latency | Can be slower, higher latency |
| Detection Risk | Higher risk of being detected/blocked | Lower risk, ideal for bypassing strict anti-bot measures |
| Use Cases | General scraping, public data, less protected sites | High-value targets, e-commerce, social media, geo-targeting |
Ethical and Legal Considerations

* robots.txt: Respect a website's robots.txt file, which specifies rules for web crawlers (a quick check is sketched after this list). Access it at http://example.com/robots.txt.
* Terms of Service: Review a website's terms of service. Scraping may be prohibited.
* Data Usage: Ensure compliance with data protection regulations (e.g., GDPR, CCPA).
* Impact on Server: Avoid overwhelming the target server with requests. Implement appropriate delays.
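
The standard library's urllib.robotparser makes the robots.txt check straightforward; a minimal sketch:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://quotes.toscrape.com/robots.txt")
rp.read()  # fetches and parses the file

url = "http://quotes.toscrape.com/page/2/"
if rp.can_fetch("*", url):  # "*" means any user agent
    print(f"Allowed to crawl {url}")
else:
    print(f"robots.txt disallows {url}")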