Twitter Scraper API — Get Tweets, Users, and Search Results in 250 ms
To scrape Twitter (X) at scale, developers use third-party APIs that wrap the public web data layer. TwitterAPI.io is one such API: an HTTP REST + WebSocket service exposing tweet search, user profiles, follower graphs, and real-time streams without an OAuth project queue or a $5,000-a-month minimum.
Sign in with Google, copy an API key, and you can hit /twitter/tweet/advanced_search inside of five minutes. The same advanced-search operators that work on twitter.com (from:, since:, #hashtag, filter:images) work here, plus a stream endpoint that pushes tweets within roughly 250 ms of being posted.
Why a third-party Twitter scraper API?
The official X Developer Platform gates real-time and historical access behind paid tiers — the Basic plan is $100/month with restrictive endpoints, Pro is $5,000/month, and Enterprise starts in five figures. Approval queues stretch from days to weeks. Third-party scraper APIs solve a different problem: predictable per-call pricing, no project review, and no rate-limit cliff at the free / paid boundary.
| Provider | Typical pricing | Minimum plan | Setup time |
|---|---|---|---|
| TwitterAPI.io | $0.0001 per typical request, no minimum | $0 (free credits on signup) | < 5 minutes |
| X Developer Platform (official) | Pro $5,000/month, Enterprise $42,000+/month | $100/mo (Basic, limited endpoints) | Days to weeks (approval queue) |
| Apify Twitter Scraper | $0.40 per 1,000 tweets + compute time | $49/mo + per-run fees | Hours (run config) |
| Bright Data Twitter Scraper | Custom enterprise quote | $500+/mo committed | Sales-led |
Competitor pricing sourced from each vendor's public pricing page; verify the current rate at the source before purchasing. TwitterAPI.io is not affiliated with the listed third parties.
Five endpoints that cover 90% of scraping use cases
Each block below has the real path, a Python + Node sample you can paste straight into a script, a representative response shape, and the gotchas we have seen in production.
Search tweets by keyword, hashtag, or operator
GET /twitter/tweet/advanced_search

The workhorse endpoint. Supports the full X advanced-search operator vocabulary (from:, to:, lang:, since:, until:, filter:images, $cashtag, #hashtag, exact phrases, OR / AND nesting). One call returns up to 20 tweets with full profile + engagement metrics + next_cursor for pagination.

Python:

```python
import requests

API_KEY = "YOUR_API_KEY"
url = "https://api.twitterapi.io/twitter/tweet/advanced_search"
params = {
    "query": '#AI from:OpenAI -filter:replies since_time:1715817600',
    "queryType": "Latest",
}
headers = {"X-API-Key": API_KEY}

resp = requests.get(url, headers=headers, params=params)
data = resp.json()
for tweet in data["tweets"]:
    print(tweet["createdAt"], tweet["text"])

# Pagination — fetch the next page
cursor = data.get("next_cursor")
if cursor:
    params["cursor"] = cursor
```

Node.js:

```javascript
import fetch from "node-fetch";

const API_KEY = process.env.TWITTERAPI_IO_KEY;
const url = new URL("https://api.twitterapi.io/twitter/tweet/advanced_search");
url.searchParams.set("query", '#AI from:OpenAI -filter:replies since_time:1715817600');
url.searchParams.set("queryType", "Latest");

const res = await fetch(url, { headers: { "X-API-Key": API_KEY } });
const data = await res.json();
for (const tweet of data.tweets) {
  console.info(tweet.createdAt, tweet.text);
}
```

Representative response:

```json
{
  "tweets": [
    {
      "id": "1789...",
      "createdAt": "...T...",
      "text": "We're rolling out a new #AI safety check across all GPT endpoints today.",
      "author": {
        "id": "4391...",
        "userName": "OpenAI",
        "name": "OpenAI",
        "followers": 4123087
      },
      "retweetCount": 1240,
      "replyCount": 312,
      "likeCount": 9824,
      "viewCount": 482113,
      "lang": "en"
    }
  ],
  "next_cursor": "DAADDAABCgABF...",
  "has_next_page": true
}
```

- Time-window filters go INSIDE the query string as advanced-search operators (since_time:UNIX / until_time:UNIX), not as separate request params.
- queryType="Latest" gives chronological order; "Top" surfaces high-engagement matches.
- Soft-deleted tweets disappear from results immediately; cache aggressively on your side if you need them.
Get a user's profile, bio, follower count, and verification status
GET /twitter/user/info

Single-user lookup by handle or numeric ID. Returns the full profile object — display name, bio, location, signup date, follower / following / tweet counts, verification badge, profile image URLs, pinned tweet ID, and protected flag.

Python:

```python
import requests

resp = requests.get(
    "https://api.twitterapi.io/twitter/user/info",
    headers={"X-API-Key": "YOUR_API_KEY"},
    params={"userName": "elonmusk"},
)
profile = resp.json()
print(profile["name"], "—", profile["followers"], "followers")
```

Node.js:

```javascript
const res = await fetch(
  "https://api.twitterapi.io/twitter/user/info?userName=elonmusk",
  { headers: { "X-API-Key": process.env.TWITTERAPI_IO_KEY } }
);
const profile = await res.json();
console.info(profile.name, "—", profile.followers, "followers");
```

Representative response:

```json
{
  "id": "44196397",
  "userName": "elonmusk",
  "name": "Elon Musk",
  "description": "...",
  "location": "",
  "createdAt": "...T...",
  "followers": 218417022,
  "following": 1054,
  "tweetCount": 81234,
  "verified": true,
  "isProtected": false,
  "profileImageUrl": "https://pbs.twimg.com/...",
  "pinnedTweetId": "1789..."
}
```

- Accepts either userName (without @) or userId in the same request — pass whichever you have.
- Suspended / deactivated accounts return 404 with a structured error body, not an empty profile.
- followers / following counts are point-in-time snapshots; for delta tracking use the bulk follower endpoints below.
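Because the counts are point-in-time snapshots, trend tracking means sampling this endpoint on a schedule and diffing the samples yourself. A minimal sketch (the helper name is ours, not the API's):

```python
from datetime import datetime, timedelta

def follower_deltas(snapshots):
    """snapshots: list of (timestamp, follower_count) tuples collected from
    repeated /twitter/user/info calls, oldest first.
    Returns one (interval, change) pair per consecutive sample pair."""
    return [
        (t1 - t0, c1 - c0)
        for (t0, c0), (t1, c1) in zip(snapshots, snapshots[1:])
    ]

samples = [
    (datetime(2025, 1, 1), 218_400_000),
    (datetime(2025, 1, 2), 218_410_000),
    (datetime(2025, 1, 3), 218_417_022),
]
deltas = follower_deltas(samples)
# two deltas: +10,000 then +7,022, each over a one-day interval
```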
Get followers with full profile data
GET /twitter/user/followers

Returns up to 200 followers per call with the same profile fields as /twitter/user/info. Cursor-based pagination walks the full follower list. Tiered pricing rewards larger pages.

Python:

```python
import requests

cursor = None
all_followers = []
while True:
    params = {"userName": "elonmusk", "count": 200}
    if cursor:
        params["cursor"] = cursor
    resp = requests.get(
        "https://api.twitterapi.io/twitter/user/followers",
        headers={"X-API-Key": "YOUR_API_KEY"},
        params=params,
    )
    data = resp.json()
    all_followers.extend(data["followers"])
    if not data.get("has_next_page"):
        break
    cursor = data["next_cursor"]
print(f"Collected {len(all_followers)} followers")
```

Node.js:

```javascript
let cursor = null;
const all = [];
while (true) {
  const url = new URL("https://api.twitterapi.io/twitter/user/followers");
  url.searchParams.set("userName", "elonmusk");
  url.searchParams.set("count", "200");
  if (cursor) url.searchParams.set("cursor", cursor);
  const res = await fetch(url, { headers: { "X-API-Key": process.env.TWITTERAPI_IO_KEY } });
  const data = await res.json();
  all.push(...data.followers);
  if (!data.has_next_page) break;
  cursor = data.next_cursor;
}
```

Representative response:

```json
{
  "followers": [
    {
      "id": "1234...",
      "userName": "developer_jane",
      "name": "Jane Developer",
      "followers": 4128,
      "following": 521,
      "createdAt": "...T..."
    }
  ],
  "next_cursor": "DAADDAABCgABF...",
  "has_next_page": true
}
```

- Tiered pricing: 3 credits / follower at 20–99 per page, 2 credits at 100–199, 1 credit at 200 (max). Always request 200 when you want the cheapest unit cost.
- Some follower entries return without a fully hydrated profile (rare) — guard with a null check on .name in production.
- Cursor stability: cursors are stable for ~24 hours; restart the walk if you pause longer than that.
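The last two gotchas compose into one defensive loop. This is a sketch rather than official client code: fetch_page stands in for whatever function issues the GET request with your API key and cursor, and the response keys match the sample above.

```python
import time

def walk_followers(fetch_page, max_cursor_age_s=24 * 3600):
    """fetch_page(cursor) -> dict shaped like the /twitter/user/followers
    response ('followers', 'next_cursor', 'has_next_page').
    Skips entries without a hydrated profile, and restarts the walk if a
    cursor has aged past the ~24 h stability window."""
    collected, cursor, cursor_born = [], None, time.monotonic()
    while True:
        if cursor and time.monotonic() - cursor_born > max_cursor_age_s:
            collected, cursor = [], None  # stale cursor: restart from the top
        page = fetch_page(cursor)
        # guard against the rare non-hydrated entries
        collected.extend(f for f in page["followers"] if f.get("name"))
        if not page.get("has_next_page"):
            return collected
        cursor, cursor_born = page["next_cursor"], time.monotonic()
```

In production, fetch_page would wrap requests.get with the X-API-Key header and the cursor param, exactly as in the sample above.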
Get follower IDs only — cheapest at scale
GET /twitter/user/followers_ids

When you only need the follower graph (not the profile of each follower), this endpoint returns up to 5,000 numeric IDs per call. Use it for graph diff jobs, audience overlap calculations, or as a cheap prefilter before hydrating profiles with /twitter/user/info on the IDs you actually care about.

Python:

```python
import requests

resp = requests.get(
    "https://api.twitterapi.io/twitter/user/followers_ids",
    headers={"X-API-Key": "YOUR_API_KEY"},
    params={"userName": "elonmusk", "count": 5000},
)
data = resp.json()
ids = data["ids"]
print(f"Got {len(ids)} follower IDs in one call")
```

Node.js:

```javascript
const url = new URL("https://api.twitterapi.io/twitter/user/followers_ids");
url.searchParams.set("userName", "elonmusk");
url.searchParams.set("count", "5000");
const res = await fetch(url, { headers: { "X-API-Key": process.env.TWITTERAPI_IO_KEY } });
const { ids, next_cursor, has_next_page } = await res.json();
console.info(`Got ${ids.length} follower IDs`);
```

Representative response:

```json
{
  "ids": [123, 456, 789, 1011, 1213, ...],
  "next_cursor": "DAADDAABCgABF...",
  "has_next_page": true
}
```

- Best unit cost: 0.45 credits per ID at 4,000–5,000 per call ≈ $0.0225 for 5,000 IDs.
- IDs are stable numeric identifiers — they survive handle changes, so this is the right input to long-running graph-tracking jobs.
- When you later want profile data for some IDs, batch them through /twitter/user/info instead of re-walking /followers.
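A graph diff job on top of this endpoint is just set arithmetic over two snapshots. The function below is an illustrative helper, not part of the API:

```python
def graph_diff(ids_before, ids_after):
    """Compare two follower-ID snapshots, e.g. two /followers_ids walks run
    a day apart. IDs survive handle changes, so the sets stay comparable."""
    before, after = set(ids_before), set(ids_after)
    return {
        "gained": sorted(after - before),  # new followers since the first walk
        "lost": sorted(before - after),    # unfollows / suspensions
    }

diff = graph_diff([123, 456, 789], [456, 789, 1011])
# diff["gained"] == [1011], diff["lost"] == [123]
```

Feed only diff["gained"] into /twitter/user/info hydration and you pay full-profile rates on new followers alone, not on the whole list each run.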
Get following (who a user is following)
GET /twitter/user/followings

Mirror of /followers, but for outbound graph edges. Same tiered pricing, same pagination model. Useful for influence mapping (who does X listen to?), competitive analysis, and discovering adjacent communities.

Python:

```python
import requests

resp = requests.get(
    "https://api.twitterapi.io/twitter/user/followings",
    headers={"X-API-Key": "YOUR_API_KEY"},
    params={"userName": "elonmusk", "count": 200},
)
data = resp.json()
for u in data["followings"]:
    print(u["userName"], "—", u["followers"], "followers")
```

Node.js:

```javascript
const res = await fetch(
  "https://api.twitterapi.io/twitter/user/followings?userName=elonmusk&count=200",
  { headers: { "X-API-Key": process.env.TWITTERAPI_IO_KEY } }
);
const data = await res.json();
for (const u of data.followings) {
  console.info(u.userName, "—", u.followers, "followers");
}
```

Representative response:

```json
{
  "followings": [
    {
      "id": "44196397",
      "userName": "elonmusk",
      "name": "Elon Musk",
      "followers": 218417022
    }
  ],
  "next_cursor": "DAADDAABCgABF...",
  "has_next_page": true
}
```

- Same tiered pricing as /followers — request the max page size of 200 for the best unit cost.
- Most users follow a few hundred accounts; you'll usually finish the walk in 1–10 requests.
- If you care about timestamps of when X started following Y, this endpoint does NOT expose that — Twitter's API doesn't either.
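For the influence-mapping and adjacent-community use cases, one quick signal is the Jaccard overlap of two accounts' following lists. A sketch (the helper name is ours):

```python
def following_overlap(a_ids, b_ids):
    """Jaccard similarity of two following lists: 1.0 means both accounts
    follow exactly the same people, 0.0 means no shared follows at all."""
    a, b = set(a_ids), set(b_ids)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

score = following_overlap([1, 2, 3], [2, 3, 4])
# score == 0.5: two shared follows out of four distinct accounts
```

Run it on numeric IDs from the walk above rather than handles, since handles can change between snapshots.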
Cost calculator
Two of the most common scraper workloads, keyword search and follower-graph pulls, can be priced directly from the real per-call rates at whatever scale you plan to run.
Pricing model: 100,000 credits = $1. Live per-endpoint rates at docs.twitterapi.io.
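The tiered rates reduce to a few lines of arithmetic. This sketch uses the rates quoted on this page (verify the live numbers at docs.twitterapi.io before budgeting), and the function names are ours:

```python
def follower_pull_cost_usd(n_followers: int, page_size: int = 200) -> float:
    """Full-profile follower walk: 3 credits/follower at page sizes 20-99,
    2 credits at 100-199, 1 credit at 200; 100,000 credits = $1."""
    if page_size >= 200:
        credits_per = 1
    elif page_size >= 100:
        credits_per = 2
    else:
        credits_per = 3
    return n_followers * credits_per / 100_000

def ids_pull_cost_usd(n_ids: int) -> float:
    """IDs-only endpoint at its cheapest tier: 0.45 credits/ID
    for pages of 4,000-5,000 IDs."""
    return n_ids * 0.45 / 100_000

cost_profiles = follower_pull_cost_usd(1_000_000)  # 10.0 dollars for 1M full profiles
cost_ids = ids_pull_cost_usd(5_000)                # 0.0225, matching the rate above
```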
Limits and gotchas worth knowing before you build
These are the failure modes we field every week.

- Time filters live inside the query string for advanced_search. Use since_time:UNIX and until_time:UNIX as operators within query, not as separate request params. This is the most common first-day bug.
- Pagination cursors are stable for about 24 hours. If you pause a walk longer than that, restart from the beginning — stale cursors fail silently with empty result sets.
- Soft-deleted tweets vanish. If a user deletes a tweet between two of your API calls, the second call no longer sees it. Cache aggressively on your side if you need archival completeness.
- Private / protected accounts are not accessible. You will see isProtected: true in the user-info response, but the follower / following lists return 403 — by design, matching what twitter.com itself shows to logged-out visitors.
- Bulk follower pulls are tiered — always request the max page size. Setting count=200 for /followers (or count=5000 for /followers_ids) gives the cheapest unit cost; smaller pages cost 2–3× more per follower.
FAQ
Do I need to apply for Twitter Developer access to scrape tweets?
No. TwitterAPI.io is an independent third-party service — there is no application, no review queue, no project gating. Sign in with Google, copy the API key from the dashboard, and you can hit /twitter/tweet/advanced_search the same minute. The official X Developer Platform requires project approval and gates real-time + historical access behind paid tiers; this is the friction we exist to remove.
How fresh is the data?
Real-time stream endpoints push tweets within roughly 250 ms of being posted (P50 ≈ 251 ms, P90 ≈ 327 ms, measured on the live demo at twitterapi.io/twitter-stream). REST endpoints like advanced_search return the most recent matching tweets in under 1 second for typical queries.
What can I scrape?
Public tweets, public user profiles, follower / following lists (with profiles or just numeric IDs), tweet replies, quote tweets, and tweet engagement metrics. Anything visible on twitter.com without logging in is in scope. Private accounts, DMs, and gated content are not accessible by design.
How does pricing work?
Per-call credit billing, USD-denominated (100,000 credits = $1). No subscriptions, no minimums. Bulk follower pulls use tiered pricing: at 200 followers per page you pay 1 credit per follower; the IDs-only endpoint goes as low as 0.45 credits per ID for pages of 4,000–5,000. Full live rates at docs.twitterapi.io.
Is this allowed? What about Twitter's terms?
TwitterAPI.io is independent — not affiliated with, endorsed by, or sponsored by X Corp. "Twitter" and "X" are trademarks of X Corp. We surface only publicly accessible information. You are responsible for how you use the data, especially regarding consent, GDPR / CCPA, and any downstream rate-limit policies of the systems you build.
Start scraping in five minutes
Sign in with Google, copy your API key from the dashboard, paste a sample above. No approval queue, no minimum spend, no OAuth dance.