Top User Agent List for Scraping in 2025 (Desktop & Mobile)

Tutorials

Eugenijus Denisov

9 min read

When you start scraping the web, you’ll notice that scripts are sometimes blocked seemingly without reason. Somehow, the website knows you’re not using a real browser, and sensing your intentions, it blocks your access.

There’s an easy way to tackle this — by changing your user agent. Keep reading to find out what a user agent is, which are the most common user agents, and how you can adjust your code to mimic the user agent of a browser.

Latest User Agents for 2025

If you need to pick a user agent string for your scraping script, the best bet is to just take the most commonly used one. It will blend with the other traffic sent to the website and not stand out.

Currently, the most common user agent is the following:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36

It's the user agent of a recent Chrome release running on macOS. Note that the macOS version it reports (10_15_7) is frozen: current Chrome builds send this value regardless of the actual macOS release, much like Windows builds keep reporting Windows NT 10.0 even on Windows 11.

While it might seem odd that other browsers are mentioned in this user agent as well, there's a solid historical reason for it: over the years, browsers kept adding tokens like Mozilla, AppleWebKit, and Safari so that websites sniffing for those names would keep serving them full-featured pages.

Below is a user agent list for 2025, grouped by browser and OS. You can use this list to help rotate user agents for lower chances of detection:

Browser / Operating system: Example user agent
Chrome / Windows: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36
Chrome / macOS: Mozilla/5.0 (Macintosh; Intel Mac OS X 15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36
Chrome / Linux: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/{version} Safari/537.36
Chrome / Android: Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.7258.63 Mobile Safari/537.36
Chrome / iOS: Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/139.0.7258.76 Mobile/15E148 Safari/604.1
Firefox / Windows: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:141.0) Gecko/20100101 Firefox/141.0
Firefox / macOS: Mozilla/5.0 (Macintosh; Intel Mac OS X 15_6; rv:141.0) Gecko/20100101 Firefox/141.0
Firefox / Linux: Mozilla/5.0 (X11; Linux x86_64; rv:141.0) Gecko/20100101 Firefox/141.0
Firefox / Android: Mozilla/5.0 (Android 16; Mobile; rv:141.0) Gecko/141.0 Firefox/141.0
Firefox / iOS: Mozilla/5.0 (iPhone; CPU iPhone OS 15_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) FxiOS/141.0 Mobile/15E148 Safari/605.1.15
Edge / Windows: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Edg/120.0.0.0
Edge / macOS: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36 Edg/139.0.3405.86
Edge / Linux: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36 Edg/137.0.3296.65
Edge / Android: Mozilla/5.0 (Linux; Android 10; HD1913) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.7204.46 Mobile Safari/537.36 EdgA/137.0.3296.92
Edge / iOS: Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.0 EdgiOS/139.3405.86 Mobile/15E148 Safari/605.1.15
Safari / Windows: No native versions for Windows
Safari / macOS: Mozilla/5.0 (Macintosh; Intel Mac OS X 15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.5 Safari/605.1.15
Safari / Linux: No native versions for Linux
Safari / Android: No native versions for Android
Safari / iOS: Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.4 Mobile/15E148 Safari/604.1

Running the most popular user agents might seem counterintuitive, but it works best against various detection methods. You want your scraper or bot to look like a regular user, so the most popular user agents will blend in the best.

Keep in mind that browser versions change all the time, so it's a good idea to research commonly used user agents and update your headers from time to time.

Alternatively, you can just get the user agent string of your browser by googling the keyphrase “What's my user agent?”. Google should display your user agent string, which you can then set as the user agent for your script. Given that it's copied from a real browser, it should look natural enough to bypass most scraping restrictions.
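You can also check what your script is actually sending by pointing it at a header-echo endpoint. The sketch below uses httpbin.org, a public echo service that isn't part of this article's examples, simply because it reflects the User-Agent header back to you:

import requests

# Any User-Agent string copied from a real browser will do here.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36"
}

# httpbin.org/user-agent echoes back the User-Agent header it received.
response = requests.get("https://httpbin.org/user-agent", headers=headers)
print(response.json())

If the printed value matches the string you set, the server sees your script as a regular Chrome browser.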

Where Can I Find a Comprehensive List of User Agents for Web Scraping Purposes?

If you require a more detailed list of user agents for web scraping, check out the most common user agents list. It contains commonly used desktop-based user agents and is continually updated based on data from people visiting the blog.

What Is a User Agent, and Why Is It Important for Web Scraping?

When you send a request to a server through a browser or an HTTP client like the Requests library in Python, the request includes HTTP headers, which contain all kinds of information about this request.

Among other things, they also include a user agent header — a name that lets the server identify what kind of application the request comes from.

To see how it looks, you can use the following code. It uses the Requests library to send a request from your device. Then, it prints out the HTTP headers of that request in the console.

"https://iproyal.com/"
printresponserequestheaders

It should print out something like this:

{'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '11', 'Content-Type': 'application/x-www-form-urlencoded'}

The relevant header in this case is the first one. As you can see, the Requests library openly tells the server that it's the client sending the request.

HTTP headers can also reveal you're using a proxy. For a quick (and code-free) check, visit our proxy headers checker.

The user agent of web browsers looks slightly different from the user agent of standalone applications. Inside, it lists the version of the web browser, the operating system, and some other tidbits that can be used to determine what kind of content to serve the user.

For example, here's the user agent string for the latest version of Chrome (at the time of writing this article) running on Windows:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36

The user agent strings of desktop and mobile browsers also differ. For example, here's a user agent of a mobile browser:

Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.7258.63 Mobile Safari/537.36

By changing the user agent string from the default one to the one that is used by browsers, you can make the website think that the request originates from a real browser and hide the fact that you're doing web scraping.

You can also rotate user agents so that websites think the requests come from several different real users.


How to Change Your User Agent?

Most of the HTTP client applications used in web scraping let you easily change the contents of the user agent string and, in that way, mimic using a real browser.

In this part, you'll learn how to do it with Requests, the most popular Python HTTP client library.

Let's say that you already have some code using Requests up and running.

import requests

response = requests.get("https://iproyal.com/")
print(response.request.headers)

To change the user agent that Requests uses, create a new headers variable that will contain a dictionary. The dictionary needs just one entry — the user agent header.

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36"
}

Now you can pass the headers variable to the requests.get() function.

response = requests.get("https://iproyal.com/", headers=headers)
print(response.request.headers)

It will include the user agent in the request, overwriting the default header. The request will now have a user agent that matches that of the Chrome browser.

{'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}

Are There Any User Agents Specifically Designed for Mobile Scraping?

Just like desktop browsers, mobile browsers have user agents that are specific to the browser and the operating system they run on.

Here are some commonly used ones:

Browser and OS: Example user agent
Chrome on Android: Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36... Chrome/136.0.0.0 Mobile Safari/537.36
Safari on iOS (iPhone): Mozilla/5.0 (iPhone; CPU iPhone OS 18_2_1 like Mac OS X) AppleWebKit/605.1.15... Version/18.2 Mobile Safari/604.1
Edge on iPadOS: Mozilla/5.0 (iPad; CPU OS 18_5_0 like Mac OS X) AppleWebKit/605.1.15... EdgiOS/139.0.3405.86 Version/18.0 Mobile Safari/604.1
Firefox on iOS: Mozilla/5.0 (iPhone; CPU iPhone OS 18_6_1 like Mac OS X) AppleWebKit/605.1.15... FxiOS/141.1 Mobile/15E148 Safari/605.1.15

How Can Using Different User Agents Help in Bypassing Anti-scraping Measures?

By using a user agent that matches that of a browser, you can hide from preventative solutions that try to block traffic that comes from simple scraping scripts.

For example, Reddit can block requests that carry the default HTTP client headers. Changing the user agent to virtually anything else already gives you a much better chance of scraping the website.

In reality, websites monitor traffic patterns, user actions, IP address usage, and many other signals to figure out whether you're doing web scraping. It's essential to always stay on top of your game, and just one small change in headers won't cut it.

One of the best ways to support large-scale scraping is through the use of rotating proxies. Proxies act as middlemen between your device and the target server, hiding the original IP address of your requests.

When you rotate proxies (and user agents) throughout the scraping session, you can hide the fact that any scraping is happening at all: each request arrives from a different IP address, so it looks like multiple regular users are simply visiting the site.
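As a rough sketch of how a proxy plugs into the Requests examples above (the proxy endpoint and credentials here are placeholders, so substitute the details from your own provider):

import requests

# Placeholder proxy endpoint and credentials; replace with your provider's details.
proxy_url = "http://username:password@proxy.example.com:12321"
proxies = {"http": proxy_url, "https": proxy_url}

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36"
}

# The request leaves through the proxy, so the target site sees the proxy's IP,
# while the header still makes it look like an ordinary Chrome browser.
response = requests.get("https://iproyal.com/", headers=headers, proxies=proxies)
print(response.status_code)

With a rotating proxy endpoint, each new request typically exits through a different IP address without any further changes to the code.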

FAQ

Can using a user agent help in avoiding IP blocking while scraping?

While using a user agent can help you avoid detection and IP blocking, it won't do anything if your IP has already been flagged. Using the same IP for all requests creates a pattern that can lead to an IP ban with or without proper user agents. For this reason, rotating proxies are a must for any large-scale scraping.

Are there any limitations or restrictions when using user agents for scraping?

You can put virtually any text in the user agent string, so there are no technical limitations. Keep in mind, however, that using the user agent of a different application might be against a website's terms of service, so masking your user agent can lead to repercussions such as an IP ban or account suspension. This is easy to work around with rotating proxies and/or multiple accounts.

How can I change the user agent in popular web scraping frameworks?

All popular scraping frameworks have a way to manipulate the headers of requests. All you need to do is add a custom User-Agent header to the request, which will then overwrite the default one used by the framework.
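For instance, in Scrapy (shown here as one hedged example of such a framework), you can set a project-wide user agent in settings.py, or pass a header for an individual request:

# settings.py in a Scrapy project: applies to every request the spiders make
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36"

# Alternatively, inside a spider callback, override it for a single request:
# yield scrapy.Request(url, headers={"User-Agent": "Mozilla/5.0 ..."})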

How to make your own user agent?

Start by picking a real browser UA (like Chrome 139 on Windows), then tweak it to include your app's name or version, as in the sketch below. Be aware that unusual strings may raise flags: a custom user agent still has to look realistic to work.
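A minimal sketch of that idea, where the "MyScraper/1.0" token is a purely hypothetical product name:

# Start from a real Chrome-on-Windows string and append your own product token.
base_ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36"
custom_ua = base_ua + " MyScraper/1.0"  # "MyScraper/1.0" is a made-up example token
print(custom_ua)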

How do I find my user agent?

Open your browser and search for “what is my user agent”. It will show your current UA string.

How to use a random user agent?

To avoid detection, you can rotate user agents. Make a list of UA strings, then randomly pick one per request in your code.
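Here's a minimal sketch of that approach with the Requests library; the user agent strings come from the list earlier in this article, and the URLs are only examples:

import random
import requests

# Pool of realistic user agents to rotate through.
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:141.0) Gecko/20100101 Firefox/141.0",
]

urls = ["https://iproyal.com/", "https://iproyal.com/blog/"]

for url in urls:
    # Pick a different user agent for every request.
    headers = {"User-Agent": random.choice(user_agents)}
    response = requests.get(url, headers=headers)
    print(url, response.status_code)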
