
Mastering Selenium Stealth for Effective Web Scraping Techniques

Justas Vitaitis

Key Takeaways

  • Selenium Stealth helps bypass anti-bot detection by modifying browser fingerprints (like navigator.webdriver) to make automation look human-like and avoid blocks, CAPTCHAs, or fake content.

  • Setting up is simple with selenium, undetected-chromedriver, and selenium-stealth, and involves tweaking browser options and adding stealth parameters.

  • Human-like behavior is critical; adding delays, scrolling, simulating mouse movements, and rotating IPs/user-agents helps avoid detection.

Web scraping is a smart way to collect data from websites. Businesses, researchers, and developers use it every day.

But websites have become far more sophisticated and can block scraping tools like Selenium quickly and easily. You might face CAPTCHAs, get served fake pages, or even have your IP banned.

Selenium Stealth changes how your browser portrays itself to the website. It hides the signs that typically reveal a bot, such as automation flags and suspicious user agent strings. As a result, it makes your scraper act more like a real person.

So, if you’re facing these problems and want to learn what Selenium Stealth is and how it works, keep reading.

Understanding Selenium Stealth

Selenium is one of the most popular tools for automating browsers. It lets you programmatically control a web browser just like a real user would.

Unfortunately, many websites can detect when Selenium is being used. They instantly block it by spotting clues like unusual browser properties, missing plugins, or an unrealistic user agent.

You can use Selenium Stealth to bypass all of that. It uses a set of tricks and tools that help your Selenium webdriver script hide those clues by changing your browser’s “fingerprint”, among other things. This way, websites think it’s a normal user and not a bot.

For instance, it can:

  • Add missing browser features that bots often skip
  • Adjust browser properties to match human behavior
  • Hide the fact that you’re using automation at all
  • Randomize your user agent

How Selenium Stealth Works

When websites check who’s visiting them, they look for clues that give away bots. These clues are sometimes called “browser fingerprints,” and they include things like:

  • Missing browser plugins
  • Weird screen sizes
  • No mouse movement
  • Unusual headers in requests
  • Automation flags in the browser code
  • A suspicious user agent

Selenium Stealth fixes or hides these clues so that your browser passes common bot-detection checks.

For example, websites can identify a Chrome browser automated with Selenium because of a special flag (navigator.webdriver = true). Selenium Stealth overrides that property so it no longer reports true, and the browser looks normal.
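
If you want to see this flag for yourself, you can read it from any page controlled by Selenium. A quick check, assuming a driver session is already running (setup is covered below):

# Read the automation flag from the current page
flag = driver.execute_script("return navigator.webdriver")
print(flag)  # True in plain Selenium-controlled Chrome; None once stealth patches it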

Benefits of Selenium Stealth

If you’ve ever had your scraper blocked, shown artificial content, or been flooded with CAPTCHAs, Selenium Stealth is for you.

Here’s what makes Selenium Stealth powerful:

1. Avoids Detection by Anti-Bot Systems

Websites like Amazon, LinkedIn, and Google use advanced anti-bot detection tools. These systems can tell when something is off, such as missing browser features or strange headers like an outdated user agent.

Selenium Stealth works by correcting these small details. It hides automation flags and helps bypass anti-bot measures so your bot looks “normal”.

Note: Selenium Stealth can still fail against more advanced anti-bot detection systems like Cloudflare or DataDome.

2. Reduces CAPTCHA and Block Risks

When sites detect a bot, they often trigger a CAPTCHA. Or worse, they block your IP. With Selenium Stealth, that happens less often.

By imitating real user behavior (such as mouse movement, scrolling, and a rotating user agent), your script doesn’t raise as many red flags. So, you can scrape without interruptions, even on sites that are tough to crack.

Proxies are also essential when using Selenium Stealth to provide additional anonymity and prevent IP blocking.

3. Improves Data Accuracy and Completeness

Some websites show artificial or limited content when they detect a bot. This means you might scrape incomplete or misleading data without even realizing it.

Using Selenium Stealth helps you get complete, accurate content by preventing detection. You’ll scrape exactly what a real user sees, which is critical for price tracking, lead generation, or research.

It also allows for seamless interaction with websites that use JavaScript and AJAX.

4. No Need for Heavy Browser Modification

Other scraping tools try to patch browsers or build custom drivers. That’s risky, slow, and often breaks after updates.

Selenium Stealth works within your existing setup. You don’t need to hack anything - just install the library, and you’re ready to go.

5. Works Seamlessly With Headless or Headed Browsers

Whether you’re running headless or not, Selenium Stealth works great. It even helps you blend in on sites that try to detect headless browsers.

You can choose how visible you want your scraper to be while still being stealthy in the background.

Getting Started with Selenium Stealth

If you’re ready to use Selenium Stealth, the good news is that it’s simple to set up, even if you’re new to web scraping. You only need Python, Selenium, and a special Python package that applies stealth settings automatically.

Let’s walk through each of the steps so you know exactly what to do.

Step 1: Install All Required Packages

First, you’ll need three main tools:

  • Selenium - the tool that automates your browser
  • Undetected-Chromedriver (uc) - launches Chrome in a “Stealth Mode”
  • Selenium-Stealth - tweaks browser details to avoid detection

To install them, run this command in your terminal or command prompt:

pip install selenium selenium-stealth undetected-chromedriver

Step 2: Launch Chrome in Stealth Mode

Here's a basic template to get Chrome running with Stealth protection:

import undetected_chromedriver as uc
import time

# Step 1: Create Chrome options
options = uc.ChromeOptions()
options.headless = False  # Set to True if you want to hide the browser window
options.add_argument("--no-sandbox")
options.add_argument("--disable-blink-features=AutomationControlled")

# Step 2: Start driver with undetected-chromedriver
driver = uc.Chrome(options=options)

Note: Python 3.12 no longer includes the distutils package, which undetected-chromedriver may require. If you hit an import error, switch to an earlier Python version to run the code.

Here’s what our code does:

  • options.headless = False shows the browser. You can turn this on (True) to hide it, but headless can be detected more easily.
  • --disable-blink-features=AutomationControlled removes a key signal that the browser is automated.
  • uc.Chrome() starts Chrome in Stealth mode, with the magic done behind the scenes.

Step 3: Add Extra Stealth with "selenium-stealth"

Selenium Stealth changes more fine-grained browser fingerprints that websites often use to catch bots.

Here’s how to add it:

from selenium_stealth import stealth

stealth(
    driver,
    languages=["en-US", "en"],
    vendor="Google Inc.",
    platform="Win32",
    webgl_vendor="Intel Inc.",
    renderer="Intel Iris OpenGL",
    fix_hairline=True,
)

Since code placement matters, here’s a full snippet:

from selenium_stealth import stealth
import undetected_chromedriver as uc
import time

options = uc.ChromeOptions()
options.headless = False  # Set to True if you want to hide the browser window
options.add_argument("--no-sandbox")
options.add_argument("--disable-blink-features=AutomationControlled")

driver = uc.Chrome(options=options)

stealth(
   driver,
   languages=["en-US", "en"],
   vendor="Google Inc.",
   platform="Win32",
   webgl_vendor="Intel Inc.",
   renderer="Intel Iris OpenGL",
   fix_hairline=True,
)

driver.get("https://www.google.com")
time.sleep(2)

is_webdriver = driver.execute_script("return navigator.webdriver")
print("navigator.webdriver:", is_webdriver)

langs = driver.execute_script("return navigator.languages")
print("navigator.languages:", langs)

vendor = driver.execute_script("return navigator.vendor")
platform = driver.execute_script("return navigator.platform")
print("navigator.vendor:", vendor)
print("navigator.platform:", platform)

webgl_info = driver.execute_script("""
   const canvas = document.createElement('canvas');
   const gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
   const debugInfo = gl.getExtension('WEBGL_debug_renderer_info');
   return {
     vendor: gl.getParameter(debugInfo.UNMASKED_VENDOR_WEBGL),
     renderer: gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL)
   };
""")
print("WebGL vendor:", webgl_info["vendor"])
print("WebGL renderer:", webgl_info["renderer"])

print("Page title:", driver.title)
driver.quit()

Running the above script prints the patched fingerprint values - great for confirming that the stealth settings actually took effect.

Each stealth parameter targets a specific detection check:

  • languages - matches the language settings of real users’ browsers
  • vendor - makes it say “Google Inc.” instead of “unknown” or “null”
  • platform - mimics real OS platforms like Windows or Mac
  • webgl_vendor - helps avoid graphics-based detection
  • renderer - makes sure your browser renders pages like a real one
  • fix_hairline - fixes tiny visual rendering differences that bots usually miss

Together, these settings make your automated browser much harder to distinguish from a real one.

Step 4: Start Scraping

Once you're all set up with Selenium Stealth, you can interact with websites as usual:

from selenium.webdriver.common.by import By

search_box = driver.find_element(By.NAME, "q")
search_box.send_keys("selenium stealth python")
search_box.submit()

Selenium Stealth will type and submit like a real user. Your browser will load content normally without tripping security filters.
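
From there, you can wait for results and extract data just like in any Selenium script. Here’s a minimal sketch (the h3 selector is illustrative and may need adjusting for the actual page):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for result headings to appear
wait = WebDriverWait(driver, 10)
results = wait.until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, "h3")))

# Print the first few result titles
for result in results[:5]:
    print(result.text)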

Step 5: Human-Like Behavior

Even with Stealth, websites monitor how you interact. So:

  • Add random time.sleep() delays between actions
  • Scroll the page occasionally
  • Avoid instantly clicking everything
  • Rotate user-agents and IPs for large-scale scraping

Here’s an example of adding delay and scrolling:

import random

# Random wait between 2 to 5 seconds
time.sleep(random.uniform(2, 5))

# Scroll slowly
driver.execute_script("window.scrollTo(0, document.body.scrollHeight/2);")

Configuring Selenium WebDriver Options

Getting Selenium WebDriver to run smoothly is all about configuring your browser so it mimics human behavior, reduces detection, and runs reliably, even at scale.

Selenium WebDriver options let you customize your browser's behavior before it launches. You can:

  • Run the browser headlessly (without opening a window)
  • Disable notifications and pop-ups
  • Use custom user agents
  • Enable extensions
  • Set download directories
  • Control automation detection

How to Set Selenium WebDriver Options in Python

Here’s a simple setup using Selenium’s Chrome options, starting with the standard webdriver import:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()

# Start configuring
options.add_argument("--disable-notifications")
options.add_argument("--start-maximized")
options.add_argument("--disable-blink-features=AutomationControlled")

driver = webdriver.Chrome(options=options)
driver.get("https://google.com")

Set a Custom User-Agent

Many sites check your user-agent string to verify the browser identity. A default Selenium UA looks suspicious.

options.add_argument("user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/120.0.0.0 Safari/537.36")

Set Download Directory

If you're scraping PDFs, images, or CSVs, set a custom download folder:

prefs = {"download.default_directory": "/path/to/your/folder"}
options.add_experimental_option("prefs", prefs)

Disable Browser Automation Banners

You can remove the automation banner as well, although it’s mostly a cosmetic change.

options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)

Also, combine it with:

driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")

Load Chrome Extensions (if needed)

Extensions can provide additional features that can make scraping a lot easier.

options.add_extension('/path/to/extension.crx')

This is useful if you're automating around a login or site that behaves better with ad blockers.

Bonus: Combine Options with undetected-chromedriver

If you’re using undetected-chromedriver, combine your options like this:

import undetected_chromedriver as uc

options = uc.ChromeOptions()
options.add_argument("--disable-blink-features=AutomationControlled")
options.add_argument("--start-maximized")

driver = uc.Chrome(options=options)

This gives you maximum stealth, especially when paired with selenium-stealth.

When your Selenium WebDriver gets blocked, rotating proxies and updating your stealth settings are the way to recover.

Advanced Selenium Stealth Techniques

Once you've got the basics of Selenium Stealth down, it's time to level up. Many websites today use advanced anti-bot measures that go beyond the surface.

To beat them, you need deeper tactics that simulate real users as closely as possible.

Here’s a breakdown of advanced Selenium Stealth techniques and how you can implement them to stay undetected for longer.

1. Modify "navigator.webdriver" Property

One of the most obvious red flags in a plain Selenium session is the navigator.webdriver property. If it returns true, you’ll be detected.

You can override it from your script like this:

driver.execute_script("Object.defineProperty(navigator, 'webdriver', {get: () => undefined})")

2. Mimic Human Behavior with Random Delays

Bots are usually fast, and humans are not. So, make sure to add randomized sleep delays to your actions:

import time
import random

def human_delay(min_sec=1, max_sec=3):
    time.sleep(random.uniform(min_sec, max_sec))
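
You can then sprinkle it between actions. For example, combining it with character-by-character typing - a sketch that reuses the driver and search box from Step 4:

from selenium.webdriver.common.by import By

search_box = driver.find_element(By.NAME, "q")
human_delay()  # pause before interacting, like a person would

# Type one character at a time instead of pasting the whole string at once
for char in "selenium stealth python":
    search_box.send_keys(char)
    time.sleep(random.uniform(0.05, 0.2))

human_delay(1, 2)
search_box.submit()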

3. Simulate Mouse Movements and Scrolls

Websites watch for mouse activity. You can simulate some basic actions:

from selenium.webdriver.common.action_chains import ActionChains

action = ActionChains(driver)
action.move_by_offset(100, 100).perform()  # move mouse
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

4. Avoid Headless Mode When Possible

Many detection systems track headless browsers. Unless necessary, try running in full mode:

options.add_argument("--headless=new")  # Use only if you must

5. Use Residential or Rotating Proxies

Sites often detect bots based on IP reputation. Avoid using datacenter proxies for stealth tasks. Instead:

  • Use residential proxies (more human-like)
  • Rotate IPs frequently (especially per session or request)
  • Sync proxy rotation with user-agent rotation

Example with Selenium:

options.add_argument('--proxy-server=http://your_proxy_here')

Note that it will only work if the proxy has no authentication (or your IP address is whitelisted). If you need more control over proxy integration, use a library like Selenium-Wire.
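
For example, Selenium-Wire lets you pass an authenticated proxy through its own options dictionary. A sketch - the endpoint and credentials below are placeholders, not real values:

from seleniumwire import webdriver  # pip install selenium-wire

# Placeholder proxy endpoint and credentials - replace with your provider's details
seleniumwire_options = {
    "proxy": {
        "http": "http://username:password@proxy.example.com:8080",
        "https": "http://username:password@proxy.example.com:8080",
    }
}

driver = webdriver.Chrome(seleniumwire_options=seleniumwire_options)
driver.get("https://example.com")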

Alternatives to Using Selenium Stealth Library

Whether you're scraping websites with strong anti-bot detection or simply want more control over stealth techniques, here are some powerful alternatives to the Selenium Stealth library that you can try.

1. Undetected-Chromedriver (uCD)

A drop-in replacement for webdriver.Chrome() that automatically patches detection points in Chromium and ChromeDriver.

It's best for users who want a ready-made Stealth browser without manually patching everything.

import undetected_chromedriver as uc
driver = uc.Chrome()

2. Puppeteer with Puppeteer-Extra (Node.js)

A powerful headless browser automation library for Node.js, often used with the puppeteer-extra-plugin-stealth package.

This one is the best choice for developers comfortable with JavaScript or Node.js looking for an alternative outside of Python.

3. Playwright (Python, Node.js, Java, .NET)

A modern alternative to Puppeteer, created by Microsoft, with better multi-browser support and more Stealth-friendly options.

Ideal for cross-browser testing and scraping that requires better control and speed than Selenium.

Ethical Considerations for Web Scraping

Web scraping can be incredibly useful, but it’s important to do it responsibly. Always respect a website’s terms of service—check their robots.txt file to see what they allow or block.

Avoid scraping personal or sensitive information, and don’t overload a website’s server with too many requests in a short time.

Give credit if you’re using the data for public research or publishing, and always be transparent about how the data was collected.

When in doubt, reach out to the site owner for permission or consult with a legal professional. Scraping ethically protects your reputation and helps keep the internet fair and safe.

Common Challenges and Solutions

Here are some common challenges and solutions around web scraping:

1. Website Blocking or Bans

Many websites detect and block scraping bots using IP bans or CAPTCHAs.

Solution: Use rotating proxies, random user agents, and tools like Selenium Stealth or Playwright to mimic human behavior. Add delays between requests to stay under the radar.

2. JavaScript-Rendered Content

Some sites load content dynamically using JavaScript, which traditional scrapers like Requests or BeautifulSoup can’t access.

Solution: Use browser automation tools like Selenium, Playwright, or Puppeteer to render pages like a real browser.

3. Changing Website Structure

If the website changes its layout or HTML, your scraper might break.

Solution: Write flexible code using class or tag patterns and set up alerts or error logging to catch issues early. Regularly maintain and test your scraper.

4. CAPTCHA and Bot Protection Systems

Systems like Cloudflare or reCAPTCHA can block access altogether.

Solution: Use stealth tools, browser automation, or third-party CAPTCHA-solving services (ethically and legally, if allowed).

Best Practices for Web Scraping with Selenium Stealth

Follow these best practices to get the most out of web scraping with Selenium Stealth.

1. Always Respect "robots.txt" and the Site’s Terms of Service

Before scraping any website, check its robots.txt file (just add /robots.txt at the end of the site’s URL). This file tells you which pages are off-limits to bots. If a page is disallowed, scraping it can be considered unethical - or even illegal, depending on local laws.

Also, read the website’s Terms of Service. Some websites clearly state that they don’t allow scraping. Ignoring these rules may get you banned or lead to legal trouble.

2. Don’t Overload the Website’s Server

Websites are built to handle a normal amount of traffic from users, not dozens of requests per second from a bot.

To avoid slowing down or crashing the website:

  • Add delays between requests (2–5 seconds is usually safe).
  • Use time.sleep() in Python or similar methods in other languages.
  • Avoid downloading huge amounts of data in one go.

3. Rotate User Agents and IP Addresses

Most websites can tell if the same browser or IP is requesting pages over and over. When this happens, they might block you.

To avoid that:

  • Use a user agent list to rotate between different browser types (like Chrome, Firefox, and Safari).
  • Use proxy services or VPNs that give you a different IP address for each session.
  • Libraries like fake_useragent help automate this.
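
For example, here’s a minimal sketch with fake_useragent, which picks a different real-world browser string each run:

from fake_useragent import UserAgent  # pip install fake-useragent
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

ua = UserAgent()
options = Options()
options.add_argument(f"user-agent={ua.random}")  # a different user-agent string on every run

driver = webdriver.Chrome(options=options)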

4. Handle JavaScript-Rendered Content Smartly

Some websites don’t show all their content in the raw HTML. Instead, they use JavaScript to load data after the page has already loaded. In this case:

  • Use tools like Selenium, Playwright, or Puppeteer, which act like real browsers and wait for the content to appear.
  • Add wait times or use explicit waits, such as Playwright’s wait_for_selector() or Selenium’s WebDriverWait, to make sure the content is fully loaded before extracting anything (see the sketch below).
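
Here’s a minimal sketch of an explicit wait in Selenium (the CSS selector is illustrative, and driver is the Selenium session from earlier):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 15 seconds for the product list to become visible before scraping it
wait = WebDriverWait(driver, 15)
product_list = wait.until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, ".product-list"))  # illustrative selector
)
print(product_list.text)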

Conclusion

By respecting website rules, handling data carefully, and building scrapers that mimic real users, you avoid getting blocked and ensure cleaner results. Take your time to plan, test, and adjust your scraper as needed. And remember: the goal isn’t just to collect data - it’s to collect it ethically, efficiently, and effectively, even when you have to work around anti-bot measures.

FAQ

Is web scraping with Selenium legal?

Web scraping is legal if you're scraping publicly available data and following the site's terms of service. Avoid scraping login-protected or copyrighted content without permission.

Why do websites block web scrapers?

Websites block scrapers to prevent server overload, protect copyrighted content, or stop bots from extracting competitive data. They detect bots through patterns like rapid requests or identical headers, which tools like Selenium Stealth help hide.

What’s the best way to avoid being blocked while scraping?

Use rotating IP addresses, random user-agents, delays between requests, and tools like Selenium Stealth. Also, respect site rules and avoid scraping sensitive or restricted pages. Being patient and responsible is key to long-term scraping success.

Author

Justas Vitaitis

Senior Software Engineer

Justas is a Senior Software Engineer with over a decade of proven expertise. He currently holds a crucial role in IPRoyal’s development team, regularly demonstrating his profound expertise in the Go programming language, contributing significantly to the company’s technological evolution. Justas is pivotal in maintaining our proxy network, serving as the authority on all aspects of proxies. Beyond coding, Justas is a passionate travel enthusiast and automotive aficionado, seamlessly blending his tech finesse with a passion for exploration.
