How to Use Undetected ChromeDriver in Python Selenium

Vilius Dumcius

10 min read

Key Takeaways

  • Undetected ChromeDriver modifies the standard Selenium driver by removing automation flags and masking browser fingerprints to help web scraping scripts bypass anti-bot detection systems.

  • The library serves as an accessible drop-in replacement for standard Selenium that patches the driver to remove bot-like signatures, though developers must still script human-like session behavior.

  • To maximize scraping success, users should combine the driver with additional evasion techniques like rotating proxies, custom user agents, and human-like interaction patterns.

Selenium is a popular library for web scraping and browser automation. By default, Selenium uses regular browser drivers such as the Google Chrome driver. These drivers used to be hard to detect, but many websites have caught on and are now much better at spotting when Selenium is being used to access them.

Most detection occurs because standard drivers expose specific browser variables and automation flags that explicitly signal the presence of a bot. The Undetected ChromeDriver is a separate library that patches most of these leaks. As a result, sessions run through Undetected ChromeDriver are much less likely to get banned or to receive a CAPTCHA.

Using Undetected ChromeDriver for web scraping with Selenium can greatly improve data collection performance and even reduce costs by reducing the amount of proxies required. As a powerful free library, Undetected ChromeDriver is a highly recommended tool for Selenium-based web scraping projects aiming to bypass modern detection.

How Undetected ChromeDriver Works

The standard Chrome driver injects JavaScript variables with names prefixed by cdc_ into the browser. Websites check for these strings to see if a bot is trying to access the site. Undetected ChromeDriver renames these internals so the Chromium-based browser looks like one a person is using normally. It also neutralizes the navigator.webdriver flag that tells sites a script is running the show.

How Browser Fingerprinting Signals Are Reduced

Every browser has a unique fingerprint based on its screen size, fonts, and hardware. Undetected ChromeDriver removes the specific automation signatures that signal a bot, allowing your browser to appear as a standard user profile.

Removing driver-level restrictions allows the browser to display its natural features without the bot label that triggers immediate IP blocks. It helps your Python script blend in with thousands of regular users.

How Cookies and Sessions Are Handled

Undetected ChromeDriver lets you use persistent Chrome profiles, which store cookies and sessions just like a standard browser would. You can save your login state or site preferences in a specific folder and reuse them later. Keeping the same session makes your movement on a site look more natural, which prevents the site from treating every page visit as a brand-new, suspicious user.
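As a minimal sketch, you could keep one profile folder per account and hand it to the driver through the user_data_dir parameter that uc.Chrome accepts. The helper name and folder layout below are my own, not part of the library:

```python
import os

def profile_dir_for(account_name, base=None):
    # Hypothetical helper: one persistent profile folder per account,
    # so cookies and login state survive between runs.
    base = base or os.path.join(os.path.expanduser('~'), '.scraper_profiles')
    path = os.path.join(base, account_name)
    os.makedirs(path, exist_ok=True)
    return path

# With undetected_chromedriver installed, reuse the profile between runs:
# import undetected_chromedriver as uc
# driver = uc.Chrome(user_data_dir=profile_dir_for('main_account'))
```

Runs started with the same folder pick up where the last one left off, so a login performed once stays valid across sessions.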

Why These Techniques Help Bypass Some Anti-Bot Services

By fixing driver-level leaks, and with human-like behavior scripted on top, Undetected ChromeDriver avoids triggering alarms. Sites often resort to IP blocking when they see too many requests from an obvious bot. These methods lower the chance of a block because the site believes a real person is clicking around, which makes your web scraping far more reliable over long periods.

Installation and Setup

Like with any external Python library, you’ll have to install it. Open up the Terminal and type in:

pip install undetected-chromedriver setuptools

You don’t need to install Selenium separately, as it’s automatically included as a dependency when you install Undetected ChromeDriver.

Undetected ChromeDriver automatically downloads and patches the correct version of ChromeDriver for you, saving you from the manual setup required in older Selenium workflows.

Once your package has been installed, we can import the library:

import undetected_chromedriver as uc


Undetected ChromeDriver Usage Guide

Sending a GET Request

GET requests are the bread and butter of any Python script that involves web scraping. Sending a GET request with Undetected ChromeDriver is nearly identical to Selenium:

import undetected_chromedriver as uc

def open_webpage(url):
    options = uc.ChromeOptions()
    
    with uc.Chrome(options=options) as driver:
        driver.get(url)
        
        print(f"Successfully navigated to {url}")
        input("Press Enter to close the browser...")

if __name__ == '__main__':
    open_webpage('https://www.coinfaucet.eu')

We start by defining a function that creates and runs our Undetected ChromeDriver instance, taking the target URL as a parameter.

Inside the function, we create a browser instance and use it to send a GET request to the URL. Since the with block closes the browser as soon as it ends, the window would otherwise disappear almost instantly after the page loads.

For learning purposes, we add an input() call so the website stays visible until we press Enter in the terminal. Once Enter is pressed, the Undetected ChromeDriver instance quits.

Finally, we use the Coin Faucet website because it employs anti-bot protections like Cloudflare that typically block standard Selenium drivers, which makes it a good test of whether Undetected ChromeDriver can bypass them.

Storing Website Content

When web scraping, you’ll usually need to extract the rendered HTML source code from the browser after the page has fully loaded. After that, you’ll use a parsing library like BeautifulSoup 4 to extract all of the necessary information.

import undetected_chromedriver as uc

def get_html(url):
    with uc.Chrome() as driver:
        driver.get(url)
        html_content = driver.page_source
        return html_content

if __name__ == '__main__':
    content = get_html('https://www.coinfaucet.eu')
    print(content[:500]) # Print first 500 chars to verify

Most of the function remains the same. However, we now store the page source into “html_content” and return it after the function is finished. We can check if it has been stored correctly by running a print command at the end.
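BeautifulSoup 4 is the usual choice for the parsing step, but as a dependency-free sketch, Python's built-in html.parser module can pull the same headings out of the stored HTML. The HeadingExtractor class below is an illustration of my own, not part of any scraping library:

```python
from html.parser import HTMLParser

class HeadingExtractor(HTMLParser):
    """Collects the text inside <h1> and <h2> tags."""
    def __init__(self):
        super().__init__()
        self.headings = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag in ('h1', 'h2'):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current and data.strip():
            self.headings.append((self._current, data.strip()))

parser = HeadingExtractor()
parser.feed('<h1>Main Title</h1><p>Body</p><h2>Subsection</h2>')
print(parser.headings)  # [('h1', 'Main Title'), ('h2', 'Subsection')]
```

In a real project you would feed it the html_content returned above instead of the hard-coded string.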

Complete Web Scraping Example

The following example shows how to grab real data from a page and save it to a file. We'll run it in a Python environment and export the results to JSON:

import undetected_chromedriver as uc
from selenium.webdriver.common.by import By
import json
import time

def scrape_titles(url):
    driver = uc.Chrome()
    
    try:
        driver.get(url)
        time.sleep(5) 
        
        elements = driver.find_elements(By.CSS_SELECTOR, 'h1, h2')
        
        scraped_data = []
        for el in elements:
            text = el.text.strip()
            if text:
                scraped_data.append({
                    "tag": el.tag_name,
                    "text": text
                })
        
        with open('output.json', 'w', encoding='utf-8') as f:
            json.dump(scraped_data, f, indent=4)
            
        print(f"Successfully saved {len(scraped_data)} items to output.json.")
        
    except Exception as e:
        print(f"An error occurred: {e}")
        
    finally:
        driver.quit()

if __name__ == '__main__':
    scrape_titles('https://iproyal.com')

Changing Undetected ChromeDriver Settings

So far, we’ve run Undetected ChromeDriver with its default settings. They’re highly optimized to evade anti-bot systems, so it’s usually a good idea to keep them as they are. Sometimes, however, you may need different settings to optimize your web scraping project.

Keep in mind that you should test different settings against the anti-bot systems you encounter. Some settings may leak information and trip up anti-bot systems, while others may have no effect at all.

import undetected_chromedriver as uc

def open_webpage(url):
    options = uc.ChromeOptions()

    # Run the browser in headless mode (no visible window)
    driver = uc.Chrome(options=options, headless=True)

    try:
        driver.get(url)
        html_content = driver.page_source
    finally:
        driver.quit()

    return html_content

html_data = open_webpage('https://www.coinfaucet.eu')
print(html_data)

We’ve now accessed Undetected ChromeDriver’s options and set the driver to headless mode. In headless mode, the browser runs in the background. Note that some anti-bot systems are more aggressive toward headless browsers, so additional stealth settings may be required.

There are plenty of other settings, such as the ability to disable image loading, change resolution, etc. Modifying user agents can help you blend in, but ensure the agent matches the browser's hardware signatures to avoid triggering inconsistency flags.

User agents are part of the HTTP protocol information sent when issuing a GET request. They describe various features of your device, such as the browser version, OS, and many other things. Modifying user agents can help you avoid blocks if done correctly:

import undetected_chromedriver as uc

def open_webpage_headless(url):
    options = uc.ChromeOptions()
    
    ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36"
    options.add_argument(f'--user-agent={ua}')

    driver = uc.Chrome(options=options, headless=True)
    
    try:
        driver.get(url)
        print("Active UA:", driver.execute_script("return navigator.userAgent;"))
        return driver.page_source
    finally:
        driver.quit()

There are a few things we've modified. First, we added a new option to Undetected ChromeDriver that sets our custom user agent. Since the user agent isn't visible during normal execution, it's worth verifying that no mistakes slipped in, especially once the code grows to include user agent lists or randomization features.

So, we execute a specific script that extracts the user agent and prints it out.
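In the same spirit of verification, a small helper can flag obvious inconsistencies in a custom user agent before you ever launch the browser. The function below is hypothetical and its checks are illustrative, not exhaustive; anti-bot vendors check far more:

```python
def ua_red_flags(user_agent):
    """Hypothetical sanity check for common user agent inconsistencies."""
    flags = []
    if 'HeadlessChrome' in user_agent:
        flags.append('advertises headless mode')
    if 'Mobile' in user_agent and 'Windows NT' in user_agent:
        flags.append('mobile token on a desktop platform string')
    if 'Chrome/' not in user_agent:
        flags.append('no Chrome version token')
    return flags

print(ua_red_flags(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/119.0.0.0 Mobile Safari/537.36"
))  # ['mobile token on a desktop platform string']
```

An empty list doesn't guarantee the agent is safe, but any flagged entry is worth fixing before a run.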

There are plenty of other options available; for example, you can set a specific Chrome version. Pinning a version that's common among regular internet users can help reduce blocks.

You could also specify your own ChromeDriver binary. However, that’s only useful if you have made your own modifications to Undetected ChromeDriver. Otherwise, you’ll just have the regular driver without any of the benefits.

Setting Up Proxies

Finally, Undetected ChromeDriver won’t be able to evade all anti-bot mechanisms. Some of them are not even based on your browser, but on your online activity. For example, sending too many requests will trigger many anti-bot systems, regardless of your browser.

So, proxies are required to reduce the likelihood of getting banned, or even to avoid some anti-bot systems entirely. Luckily, Undetected ChromeDriver makes proxy integration extremely easy:

import undetected_chromedriver as uc
from selenium.webdriver.common.by import By

def open_webpage_with_proxy(url, proxy_url):
    options = uc.ChromeOptions()
    
    options.add_argument(f'--proxy-server={proxy_url}')

    driver = uc.Chrome(options=options)
    
    try:
        driver.get('https://api.ipify.org')
        print(f"Current IP through proxy: {driver.find_element(By.TAG_NAME, 'body').text}")
        
        driver.get(url)
        return driver.page_source
    finally:
        driver.quit()

We add another option to the list to set our proxy server settings. Note that the code will seemingly execute correctly even if the proxy address is invalid. In that case, however, Undetected ChromeDriver simply downloads the HTML of the browser's error page.
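One way to catch a misconfigured proxy is to look for Chrome's network error codes in the returned page source. The marker list below is a heuristic of my own, based on the codes Chrome renders on its error pages, not an official API:

```python
def looks_like_chrome_error_page(page_source):
    """Heuristic: Chrome's network error pages embed these codes in the HTML,
    so their presence usually means the proxy (or connection) failed."""
    markers = (
        'ERR_PROXY_CONNECTION_FAILED',
        'ERR_TUNNEL_CONNECTION_FAILED',
        'ERR_NAME_NOT_RESOLVED',
    )
    return any(marker in page_source for marker in markers)

print(looks_like_chrome_error_page('<html>ERR_PROXY_CONNECTION_FAILED</html>'))  # True
print(looks_like_chrome_error_page('<html><h1>Welcome</h1></html>'))  # False
```

Calling this on driver.page_source right after driver.get() lets your script retry with a different proxy instead of silently scraping an error page.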

Best Practices Checklist

  • Random but bounded delays. Always wait between actions so you don’t look like a bot.
  • Scrolling and interaction patterns. Move the page up and down to mimic a real person’s actions.
  • Limiting concurrency. Don’t open too many browser windows at the exact same time.
  • Reusing sessions. Use a persistent user_data_dir to save cookies and local storage, ensuring you maintain a consistent login state.
  • Avoiding bulk access to deep pages. Don't try to jump to deep pages without visiting the home page first.
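The first two checklist items can be sketched as a small helper. The human_delay name is my own, and the commented scroll loop only illustrates how you might combine it with a live driver:

```python
import random
import time

def human_delay(min_s=1.0, max_s=4.0):
    """Sleep for a random, bounded amount of time and return the value used."""
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay

# With a live driver, a simple interaction pattern could look like:
# for _ in range(3):
#     driver.execute_script("window.scrollBy(0, 600);")
#     human_delay(0.5, 2.0)
```

Bounded randomness matters: fixed delays form a detectable rhythm, while unbounded ones waste scraping time.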

Limitations of Undetected ChromeDriver

  • No IP reputation fix. If your IP address is already on a blacklist, you’ll still be blocked.
  • Headless mode detection. Operating without a GUI often triggers specific virtual environment checks that reveal automation, even when using stealth libraries.
  • Anti-bot evolution. Websites constantly update their code to break tools like ChromeDriver.
  • No guarantee. There’s no promise that it will work continuously on every single website.

Alternatives to Undetected ChromeDriver

Undetected ChromeDriver runs on Selenium, so any library that replaces the latter is worth considering. Puppeteer, Pyppeteer, and Playwright are all good options. However, you'll have to replace a good chunk of the code in this web scraping guide to make them work. Additionally, you'll likely need stealth plugins for these libraries, which handle the same browser patching that Undetected ChromeDriver does for Selenium.

If you’re looking for alternatives to bypass anti-bot systems, there are a few things you can do to optimize your pipeline. Start by experimenting with headful and headless modes, as headless operation can trigger anti-bot systems even with Undetected ChromeDriver.

User agents and proxies are two other strong tools that’ll help you avoid anti-bot systems. With proxies, you can keep switching IP addresses, which can make bans a non-issue. User agents, on the other hand, can help you reduce the likelihood of bans and blocked access.
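The rotation itself can be as simple as cycling through a pool of proxy URLs, one per browser session. The pool contents below are placeholders; substitute real endpoints from your provider:

```python
from itertools import cycle

# Placeholder pool -- replace with real proxy endpoints from your provider.
PROXY_POOL = cycle([
    'http://proxy1.example.com:8080',
    'http://proxy2.example.com:8080',
    'http://proxy3.example.com:8080',
])

def next_proxy_argument():
    """Return the --proxy-server flag for the next proxy in the pool."""
    return f'--proxy-server={next(PROXY_POOL)}'

print(next_proxy_argument())  # --proxy-server=http://proxy1.example.com:8080
print(next_proxy_argument())  # --proxy-server=http://proxy2.example.com:8080
```

Each new uc.Chrome instance then gets options.add_argument(next_proxy_argument()), which is the same mechanism shown in the proxy section above.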

Finally, you can tinker with your web scraping settings. Sending too many requests, accessing deeply nested pages directly, and several other things can trigger anti-bot systems. So, experiment with the way you’re collecting data.

Quick Rundown

Start by installing Undetected ChromeDriver in your Python environment:

pip install undetected-chromedriver setuptools

Import the library:

import undetected_chromedriver as uc

Copy and paste the code. Remove features you don’t need:

import undetected_chromedriver as uc
import time

def run_scraper(url):
    options = uc.ChromeOptions()
    
    ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/145.0.0.0 Safari/537.36"
    options.add_argument(f'--user-agent={ua}')

    # Proxies (Optional)
    # options.add_argument('--proxy-server=http://your.proxy:port')

    # Headless=True handles the 'new' headless mode automatically
    driver = uc.Chrome(options=options, headless=True)
    
    try:
        driver.get(url)
        time.sleep(5) 
        print("Page Source Length:", len(driver.page_source))
    finally:
        driver.quit()

if __name__ == '__main__':
    run_scraper('https://www.coinfaucet.eu')

FAQ

Is undetected-chromedriver legal to use?

Yes, the tool is legal. While web scraping public data is protected by some court rulings, you must ensure you’re not collecting personal data (violating GDPR/CCPA) or bypassing passwords and paywalls.

Can websites still detect Selenium with UC?

Yes. Advanced websites use multi-layered detection, including TLS fingerprinting and behavioral analysis, which can identify web scraping and automation even when the browser flags are hidden.

Should proxies rotate per request or per session?

Rotate per request for web scraping, but for better stability, use sticky sessions that keep the same IP for a few minutes to mimic a natural user session.

Does changing the user-agent help, and when can it hurt?

It helps you look like different devices, but an inconsistent User-Agent (a mobile agent on a desktop hardware profile) is a primary trigger for modern anti-bot filters.

Article by IPRoyal