Top User Agents for Web Scraping in 2025 [Updated List]

Eugenijus Denisov
Key Takeaways
- The most popular user agents vary by device category, but Windows devices running Chrome are often cited as the most common.
- A user agent identifies the browser and device making a request, and it must look natural if you want to web scrape without disruptions.
- The default user agent header can be changed in a Python script by defining a headers dictionary (or a list of user agents to rotate) and passing it to the requests.get() function.
Your web scraper might get blocked right after it visits a website, with no obvious reason. The block can show up as an increased number of CAPTCHA challenges, redirects to a block page, or an HTTP 403 Forbidden error.
A common cause is that the website can tell from your scraper's user agent that you're not using a regular web browser. Changing it to one of the more popular user agents is the first step toward fixing the problem.
Which User Agents Are Commonly Used for Scraping Websites?
If you need to pick a user agent string for your web scraping script, the best bet is to take one from the most popular user agent list. It will blend in with the rest of the traffic sent to the website and won't stand out.
As of the time of writing, the most common user agent is the following:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36
It's the user agent header of the latest Chrome browser running on Windows (from Windows 11 onwards, browsers no longer distinguish between Windows versions in the user agent string, so Windows NT 10.0 covers both Windows 10 and 11).
One user agent might not be enough: it's recommended to rotate through a list of common agents, and you'll have to implement that rotation intelligently. In some cases, you might also need user agent strings from specific devices, so it's best to have device-specific user agents ready. The lists below cover the main device categories, with a short sketch of organizing them afterwards.
Common Windows Desktop User Agents
- Chrome 134.0.0, Windows
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36
- Edge 134.0.3124, Windows 10/11
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.3124.85
Common macOS Desktop User Agents
- Chrome 134.0.0, Mac OS X 10.15.7
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36
- Edge 134.0.3124, Mac OS X 10.15.7
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36 Edg/134.0.3124.85
Common iPhone User Agents
- Chrome Mobile iOS 134.0.6998, iOS 17.7
Mozilla/5.0 (iPhone; CPU iPhone OS 17_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/134.0.6998.99 Mobile/15E148 Safari/604.1
- Mobile Safari 18.3, iOS 17.7.2
Mozilla/5.0 (iPhone; CPU iPhone OS 17_7_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.3 Mobile/15E148 Safari/604.1
Common Android User Agents
- Chrome Mobile 134.0.6998, Android 10
Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.6998.135 Mobile Safari/537.36
- Edge Mobile 134.0.3124, Android 10
Mozilla/5.0 (Linux; Android 10; HD1913) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.6998.135 Mobile Safari/537.36 EdgA/134.0.3124.68
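If you target a specific device category, it helps to keep these strings organized by device. Below is a minimal sketch using the strings listed above; the user_agents_by_device dictionary is just an illustrative structure, and httpbin.org simply echoes headers back so you can verify what was sent.
import requests

# Device-specific user agents taken from the lists above
user_agents_by_device = {
    "windows": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
    "macos": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
    "iphone": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_7 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/134.0.6998.99 Mobile/15E148 Safari/604.1",
    "android": "Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.6998.135 Mobile Safari/537.36",
}

# Pick the category that matches the traffic you want to imitate
headers = {"User-Agent": user_agents_by_device["windows"]}
response = requests.get("https://httpbin.org/headers", headers=headers)
print(response.json())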
Alternatively, instead of choosing someone else's user agent, you can use your own. Simply enter “What's my user agent?” in Google, and it will display your user agent string, which you can set for your script. It should look natural enough to bypass most scraping restrictions.
Where Can I Find a Comprehensive List of User Agents for Web Scraping Purposes?
The most popular user agent strings change frequently as new devices and browser versions come and go, so you'll need to update your list of user agents regularly.
Several dynamically updated sources, from blog posts to dedicated user agent lists, are popular in the web scraping community.
What Is a User Agent, and Why Is It Important for Web Scraping?
To actually use the popular user agents in your scripts, you need to understand their role. When you connect to a server through a browser or an HTTP client like the Requests library in Python, your requests include HTTP headers, which contain all kinds of information about your device.
Among other things, they also include a user agent header, which identifies your browser or, in the case of web scraping, the script that is making the request.
We can see how it looks in action by using the following code. It uses the Requests library to send a request from your device and then prints out the HTTP headers of that request in the console.
import requests

# Send a request and print the headers that were attached to it
response = requests.get("https://example.com/")
print(response.request.headers)
It should print out something like this:
{'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate, br, zstd', 'Accept': '*/*', 'Connection': 'keep-alive'}
The relevant header here is the first one: with it, the Requests library tells the server which client you're using, which is exactly what you might want to hide when web scraping. HTTP headers can also reveal that you're using a proxy. For a quick (and code-free) check, visit our proxy headers checker.
The User-Agent header sent by web browsers looks different from the Requests library default. It lists the browser version, the operating system, and a few other tidbits the server can use to decide what kind of content to serve.
For example, here's the user agent string for the latest version of Chrome (at the time of writing this article) running on Windows:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36
It might seem odd that the user agent for Chrome starts with Mozilla/5.0, but this is due to legacy compatibility reasons rooted in the early days of web browsers. The user agent strings of mobile browsers follow a similar structure:
Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Mobile Safari/537.36
When web scraping, you want to conceal the fact that you're using the Python Requests library or another tool. You also want your user agent string to match one used by common browsers so it doesn't stand out.
How to Change Your User Agent?
Most of the HTTP client applications used in web scraping let you easily change the contents of the user agent string, mimicking a real browser. We'll use the Requests library as an example.
After setting up the Requests library for a web scraper, you can create a new headers variable that holds a dictionary. The dictionary needs just one entry: the user agent header you want to send.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36'}
Now you can pass the headers variable to the requests.get() function.
import requests
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36'}
response = requests.get("https://httpbin.org/headers", headers=headers)
print(response.request.headers)
This will include the user agent header in the request, overwriting the default header. The request will now have a user agent header that matches that of a standard Chrome browser.
{'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
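If your scraper makes many requests, you can also set the header once on a requests.Session object so every request sends it automatically. A minimal sketch:
import requests

session = requests.Session()
# Headers set on the session are attached to every request it makes
session.headers.update({'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36'})

response = session.get("https://httpbin.org/headers")
print(response.request.headers)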
Are There Any User Agents Specifically Designed for Mobile Scraping?
When you need to scrape mobile content, you must ensure you are using mobile IPs and a mobile user agent. Otherwise, the mobile content you seek might not be available, or your requests might be flagged and blocked.
Just like desktop user agents, mobile ones differ, and some are more popular than others. Here are some commonly used ones, with a short usage sketch after the list:
- Chrome Mobile, Android
Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36
- Mobile Safari, iOS
Mozilla/5.0 (iPhone; CPU iPhone OS 18_3_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.3.1 Mobile/15E148 Safari/604.1
- Chrome Mobile iOS, iOS
Mozilla/5.0 (iPhone; CPU iPhone OS 18_3_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/134.0.6998.99 Mobile/15E148 Safari/604.1
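To see a mobile user agent in action, you can compare what a site returns for a desktop string and a mobile string. A rough sketch, assuming the target site varies its markup by device (example.com is a placeholder for your target):
import requests

desktop_ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36'
mobile_ua = 'Mozilla/5.0 (iPhone; CPU iPhone OS 18_3_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.3.1 Mobile/15E148 Safari/604.1'

url = "https://example.com/"  # placeholder - use your target site
for label, ua in [("desktop", desktop_ua), ("mobile", mobile_ua)]:
    response = requests.get(url, headers={"User-Agent": ua})
    # Sites that serve device-specific content usually return different markup sizes
    print(label, response.status_code, len(response.text))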
How Can Using Different User Agents Help in Bypassing Anti-scraping Measures?
User agent headers that match popular devices and browsers allow you to blend in with common website traffic. Without such a change, your scrapers are easier to detect. Some websites might even block default user agent headers.
For example, Reddit blocks requests that carry the default headers of Python HTTP clients like Requests. Changing the user agent to almost anything else gives you a much better chance of scraping websites undetected.
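You can check this behavior yourself with a couple of requests. A quick sketch (whether a particular site blocks the default header can change at any time):
import requests

url = "https://www.reddit.com/"
browser_ua = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36'}

# The default python-requests user agent is often rejected outright
print(requests.get(url).status_code)
# A browser-like user agent is far more likely to return 200
print(requests.get(url, headers=browser_ua).status_code)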
Using a different user agent isn't a silver bullet; if it were, nobody would ever get IP banned after changing it and using browser automation libraries like Puppeteer correctly. In reality, websites monitor much more than suspicious user agent strings.
Traffic patterns, user actions, IP address usage, and other factors can give away that you're web scraping. While the user agent header is one of the first things checked, one small change in headers won't cut it on its own.
At the very least, a large-scale web scraping project will also need rotating proxies. They act as middlemen between you and the server, hiding your request's original IP address. By rotating proxies throughout the scraping session, each request gets a different IP address, so you'll appear as multiple visitors.
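With the Requests library, proxies are passed per request through the proxies argument. A minimal sketch; the endpoint and credentials below are placeholders for whatever your proxy provider gives you:
import requests

# Placeholder endpoint and credentials - substitute your provider's values
proxies = {
    "http": "http://username:password@proxy.example.com:12321",
    "https": "http://username:password@proxy.example.com:12321",
}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/141.0.0.0 Safari/537.36'}

# With a rotating proxy endpoint, each request can exit through a different IP
response = requests.get("https://httpbin.org/ip", headers=headers, proxies=proxies)
print(response.json())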
IPRoyal offers residential proxies with over thirty million ethically sourced, unique IP addresses. We can provide you with everything you need to scale your web scraping efforts beyond simple scripts.
How to Avoid Bot Detection Using User Agents
User agent and IP address checks are the first line of defense you must get past when web scraping. Inspecting them takes seconds, since both arrive with the very first request you send. Other bot detection methods take longer, combining user agents with behavioral analysis or other signals.
Whether user-agent-related restrictions appear immediately or not, it's essential to spoof user-agent strings correctly in your scraper's requests.
- Use realistic and up-to-date user agents. Choose user agents from popular lists that people are actually using while browsing your target websites.
- Rotate user agents. Just like proxy IP addresses, user agents should be rotated. A standard recommendation is to rotate them every one to five requests, but it varies by use case.
- Mix device types and browsers. While mobile devices and Chrome browsers are most popular among visitors, you should mix your requests to reduce the chance of detection.
- Randomize related headers. When rotating user agent headers, don't forget to adjust other request headers, such as Accept, Accept-Language, or Referer, which anti-bot systems might also check (see the sketch after this list).
- Test your user agent strings. Use header echoing services, like https://httpbin.org/headers, to confirm whether your user agent and other headers are shown as intended.
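One way to keep related headers consistent is to store complete header profiles instead of bare user agent strings. A minimal sketch; the Accept and Accept-Language values are illustrative:
import random
import requests

# Each profile pairs a user agent with plausible companion headers
header_profiles = [
    {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9",
    },
    {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-GB,en;q=0.8",
    },
]

# Send the whole profile so the headers stay consistent with each other
response = requests.get("https://httpbin.org/headers", headers=random.choice(header_profiles))
print(response.json())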
The easiest way to manage multiple user agents in a Python script is to create a list, as shown below.
user_agents = [
# Chrome Mobile, Android
'Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36',
# Chrome, Windows
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36',
# Chrome, Mac
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36'
]
How you implement user agent rotation is up to you. One of the easiest ways is to use Python's random module to select one of the agents in the list for each request.
import random
import requests
user_agents = [
'Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Mobile Safari/537.36',
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36',
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36'
]
url = "https://httpbin.org"
headers = {"User-Agent": random.choice(user_agents)}
response = requests.get(url, headers=headers)
print(f"Using User-Agent: {headers['User-Agent']}")
print(f"Response status: {response.status_code}")
Alternatively, you can use dedicated Python libraries, such as fake-useragent, to manage and even generate unique user agents when needed. However, this might not be optimal if you'd rather not depend on a third party's availability and updates.
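If you do go that route, usage is roughly as follows (a sketch assuming the library's current API; install it with pip install fake-useragent):
from fake_useragent import UserAgent

ua = UserAgent()
# .random returns a different real-world user agent string on each access
print(ua.random)
# You can also ask for a specific browser family
print(ua.chrome)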
FAQ
Can using a user agent help in avoiding IP blocking while scraping?
While using a common user agent can help avoid detection and blocks, it won't do anything if your IP has already been flagged. Using the same IP for all requests will create a pattern that can lead to an IP ban, with or without proper user agents. The use of rotating proxies for large-scale web scraping remains a must.
Are there any limitations or restrictions when using user agents for scraping?
You can put virtually any text string in the user agent header, but keep in mind that using headers of a different application might be against the terms of service of a website. Trying to hide your user agent might lead to an IP ban or account suspension, but this can be solved with rotating proxies.
How can I change the user agent in popular web scraping frameworks?
All popular web scraping frameworks provide a way to overwrite the default request headers. In Python-based frameworks, this often involves defining a list of user agents and setting rules for when your scraper chooses one.
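For example, in Scrapy, you can set a single user agent with the USER_AGENT setting, or rotate them from a downloader middleware. A minimal sketch of the middleware approach (the class name and list below are illustrative):
# middlewares.py - a minimal user agent rotation sketch for Scrapy
import random

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36',
]

class RandomUserAgentMiddleware:
    # Scrapy calls process_request for every outgoing request
    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(USER_AGENTS)
Remember to enable the middleware in your project's DOWNLOADER_MIDDLEWARES setting so Scrapy actually applies it.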