twitterapi.io is an independent third-party service. Not affiliated with X Corp.


Twitter Scraper API — Get Tweets, Users, and Search Results in 250 ms

To scrape Twitter (X) at scale, developers use third-party APIs that wrap the public web data layer. TwitterAPI.io is one such API: an HTTP REST + WebSocket service exposing tweet search, user profiles, follower graphs, and real-time streams without an OAuth project queue or a $5,000-a-month minimum.

Sign in with Google, copy an API key, and you can hit /twitter/tweet/advanced_search within five minutes. The same advanced-search operators that work on twitter.com (from:, since:, #hashtag, filter:images) work here, plus a stream endpoint that pushes tweets within roughly 250 ms of being posted.
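As a quick sketch of what that first call looks like (the `query` parameter name and response field names here are assumptions; check docs.twitterapi.io for the exact request shape):

```python
import requests

def build_query(*operators: str) -> str:
    """Join twitter.com-style search operators into one query string."""
    return " ".join(operators)

def search_tweets(api_key: str, query: str) -> dict:
    # Parameter name 'query' is an assumption; see the live docs.
    resp = requests.get(
        "https://api.twitterapi.io/twitter/tweet/advanced_search",
        headers={"X-API-Key": api_key},
        params={"query": query},
    )
    resp.raise_for_status()
    return resp.json()

# Operators compose with spaces, exactly as in the twitter.com search box:
q = build_query("from:nasa", "filter:images", "since:2024-01-01")
# results = search_tweets("YOUR_API_KEY", q)
```

The same `build_query` output works verbatim in the twitter.com search box, which is a handy way to sanity-check an operator combination before spending credits on it.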

Why a third-party Twitter scraper API?

The official X Developer Platform gates real-time and historical access behind paid tiers — the Basic plan is $100/month with restrictive endpoints, Pro is $5,000/month, and Enterprise starts in five figures. Approval queues stretch from days to weeks. Third-party scraper APIs solve a different problem: predictable per-call pricing, no project review, and no rate-limit cliff at the free / paid boundary.

| Provider | Typical pricing | Minimum plan | Setup time |
| --- | --- | --- | --- |
| TwitterAPI.io (this site) | $0.0001 per typical request, no minimum | $0 (free credits on signup) | < 5 minutes |
| X Developer Platform (official) | Pro $5,000/month, Enterprise $42,000+/month | $100/mo (Basic, limited endpoints) | Days to weeks (approval queue) |
| Apify Twitter Scraper | $0.40 per 1,000 tweets + compute time | $49/mo + per-run fees | Hours (run config) |
| Bright Data Twitter Scraper | Custom enterprise quote | $500+/mo committed | Sales-led |

Competitor pricing sourced from each vendor's public pricing page; verify the current rate at the source before purchasing. TwitterAPI.io is not affiliated with the listed third parties.

Five endpoints that cover 90% of scraping use cases

Each block below has the real path, a Python + Node sample you can paste straight into a script, a representative response shape, and the gotchas we have seen in production.

Get a user's profile, bio, follower count, and verification status

GET /twitter/user/info

Single-user lookup by handle or numeric ID. Returns the full profile object: display name, bio, location, signup date, follower / following / tweet counts, verification badge, profile image URLs, pinned tweet ID, and protected flag.

Python
import requests

resp = requests.get(
    "https://api.twitterapi.io/twitter/user/info",
    headers={"X-API-Key": "YOUR_API_KEY"},
    params={"userName": "elonmusk"},
)
profile = resp.json()
print(profile["name"], "—", profile["followers"], "followers")
Node.js
const res = await fetch(
  "https://api.twitterapi.io/twitter/user/info?userName=elonmusk",
  { headers: { "X-API-Key": process.env.TWITTERAPI_IO_KEY } }
);
const profile = await res.json();
console.info(profile.name, "—", profile.followers, "followers");
Sample response (truncated)
{
  "id": "44196397",
  "userName": "elonmusk",
  "name": "Elon Musk",
  "description": "...",
  "location": "",
  "createdAt": "...T...",
  "followers": 218417022,
  "following": 1054,
  "tweetCount": 81234,
  "verified": true,
  "isProtected": false,
  "profileImageUrl": "https://pbs.twimg.com/...",
  "pinnedTweetId": "1789..."
}
Gotchas
  • Accepts either userName (without @) or userId in the same request — pass whichever you have.
  • Suspended / deactivated accounts return 404 with a structured error body, not an empty profile.
  • followers / following counts are point-in-time snapshots; for delta tracking use the bulk follower endpoints below.

Get followers with full profile data

GET /twitter/user/followers

Returns up to 200 followers per call with the same profile fields as /twitter/user/info. Cursor-based pagination walks the full follower list. Tiered pricing rewards larger pages.

Python
import requests

cursor = None
all_followers = []

while True:
    params = {"userName": "elonmusk", "count": 200}
    if cursor:
        params["cursor"] = cursor

    resp = requests.get(
        "https://api.twitterapi.io/twitter/user/followers",
        headers={"X-API-Key": "YOUR_API_KEY"},
        params=params,
    )
    data = resp.json()
    all_followers.extend(data["followers"])

    if not data.get("has_next_page"):
        break
    cursor = data["next_cursor"]

print(f"Collected {len(all_followers)} followers")
Node.js
let cursor = null;
const all = [];

while (true) {
  const url = new URL("https://api.twitterapi.io/twitter/user/followers");
  url.searchParams.set("userName", "elonmusk");
  url.searchParams.set("count", "200");
  if (cursor) url.searchParams.set("cursor", cursor);

  const res = await fetch(url, { headers: { "X-API-Key": process.env.TWITTERAPI_IO_KEY } });
  const data = await res.json();
  all.push(...data.followers);

  if (!data.has_next_page) break;
  cursor = data.next_cursor;
}
Sample response (truncated)
{
  "followers": [
    {
      "id": "1234...",
      "userName": "developer_jane",
      "name": "Jane Developer",
      "followers": 4128,
      "following": 521,
      "createdAt": "...T..."
    }
  ],
  "next_cursor": "DAADDAABCgABF...",
  "has_next_page": true
}
Gotchas
  • Tiered pricing: 3 credits / follower at 20–99 per page, 2 credits at 100–199, 1 credit at 200 (max). Always request 200 when you want the cheapest unit cost.
  • Some follower entries return without a fully hydrated profile (rare) — guard with a null check on .name in production.
  • Cursor stability: cursors are stable for ~24 hours; restart the walk if you pause longer than that.
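The walk in the samples above, plus the null-check guard from the gotchas, can be wrapped in one reusable generator. This helper is our own sketch, not part of the API; plug in a `fetch` callable that issues the actual HTTP request:

```python
from typing import Callable, Iterator, Optional

def walk_cursor(fetch: Callable[[Optional[str]], dict], key: str) -> Iterator[dict]:
    """Walk a cursor-paginated endpoint, yielding hydrated entries.

    `fetch` takes a cursor (None for the first page) and returns the parsed
    JSON page; `key` names the list field, e.g. "followers" or "followings".
    """
    cursor = None
    while True:
        page = fetch(cursor)
        for entry in page.get(key, []):
            if entry.get("name"):  # guard: rare non-hydrated profiles
                yield entry
        if not page.get("has_next_page"):
            break
        cursor = page["next_cursor"]
```

For /twitter/user/followers the `fetch` would be a thin wrapper around the requests call shown above, passing `count=200` and the cursor; the generator then handles termination and the hydration guard in one place.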

Get follower IDs only — cheapest at scale

GET /twitter/user/followers_ids

When you only need the follower graph (not the profile of each follower), this endpoint returns up to 5,000 numeric IDs per call. Use it for graph diff jobs, audience overlap calculations, or as a cheap prefilter before hydrating profiles with /twitter/user/info on the IDs you actually care about.

Python
import requests

resp = requests.get(
    "https://api.twitterapi.io/twitter/user/followers_ids",
    headers={"X-API-Key": "YOUR_API_KEY"},
    params={"userName": "elonmusk", "count": 5000},
)
data = resp.json()
ids = data["ids"]
print(f"Got {len(ids)} follower IDs in one call")
Node.js
const url = new URL("https://api.twitterapi.io/twitter/user/followers_ids");
url.searchParams.set("userName", "elonmusk");
url.searchParams.set("count", "5000");

const res = await fetch(url, { headers: { "X-API-Key": process.env.TWITTERAPI_IO_KEY } });
const { ids, next_cursor, has_next_page } = await res.json();
console.info(`Got ${ids.length} follower IDs`);
Sample response (truncated)
{
  "ids": [123, 456, 789, 1011, 1213, ...],
  "next_cursor": "DAADDAABCgABF...",
  "has_next_page": true
}
Gotchas
  • Best unit cost: 0.45 credits per ID at 4,000–5,000 per call ≈ $0.0225 for 5,000 IDs.
  • IDs are stable numeric identifiers — they survive handle changes, so this is the right input to long-running graph-tracking jobs.
  • When you later want profile data for some IDs, batch them through /twitter/user/info instead of re-walking /followers.
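The IDs-first pattern in practice: pull the cheap ID sets, compute the overlap locally, then hydrate only the intersection. The set math below is plain Python (the fetching side is as in the sample above):

```python
def audience_overlap(ids_a: set[int], ids_b: set[int]) -> dict:
    """Compare two follower-ID sets pulled via /twitter/user/followers_ids."""
    shared = ids_a & ids_b
    union = ids_a | ids_b
    return {
        "shared": shared,
        # Jaccard similarity: shared audience as a fraction of the union.
        "jaccard": len(shared) / len(union) if union else 0.0,
    }

overlap = audience_overlap({123, 456, 789}, {456, 789, 1011})
# Only overlap["shared"] needs hydrating through /twitter/user/info.
```

Because the IDs survive handle changes, you can persist these sets and diff them across runs to detect follow / unfollow churn without re-pulling profiles.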

Get following (who a user is following)

GET /twitter/user/followings

Mirror of /followers, but for outbound graph edges. Same tiered pricing, same pagination model. Useful for influence-mapping (who does X listen to?), competitive analysis, and discovering adjacent communities.

Python
import requests

resp = requests.get(
    "https://api.twitterapi.io/twitter/user/followings",
    headers={"X-API-Key": "YOUR_API_KEY"},
    params={"userName": "elonmusk", "count": 200},
)
data = resp.json()
for u in data["followings"]:
    print(u["userName"], "—", u["followers"], "followers")
Node.js
const res = await fetch(
  "https://api.twitterapi.io/twitter/user/followings?userName=elonmusk&count=200",
  { headers: { "X-API-Key": process.env.TWITTERAPI_IO_KEY } }
);
const data = await res.json();
for (const u of data.followings) {
  console.info(u.userName, "—", u.followers, "followers");
}
Sample response (truncated)
{
  "followings": [
    {
      "id": "44196397",
      "userName": "elonmusk",
      "name": "Elon Musk",
      "followers": 218417022
    }
  ],
  "next_cursor": "DAADDAABCgABF...",
  "has_next_page": true
}
Gotchas
  • Same tiered pricing as /followers — request the max page size of 200 for the best unit cost.
  • Most users follow a few hundred accounts; you'll usually finish the walk in 1–10 requests.
  • If you care about timestamps of when X started following Y, this endpoint does NOT expose that — Twitter's API doesn't either.

Cost calculator

Two of the most common scraper workloads, priced for your scale using the real per-call rates, not marketing estimates.

Search tweets by keyword

Example volume: 50,000 tweets/mo
Estimated monthly cost: $7.50
At ~$0.00015 per tweet returned via advanced_search.

Pull a follower graph

Example volume: 1,000,000 followers
With full profiles (followers endpoint): $10.00
IDs only (followers_ids endpoint): $4.50
IDs-only is 55% cheaper. Use it for graph diffs / overlap; hydrate selected IDs through /twitter/user/info afterwards.

Pricing model: 100,000 credits = $1. Live per-endpoint rates at docs.twitterapi.io.
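The calculator's follower-graph numbers fall straight out of the credit rates quoted above (100,000 credits = $1):

```python
CREDITS_PER_DOLLAR = 100_000

def credits_to_usd(credits: float) -> float:
    """Convert a credit total to dollars at the flat 100,000:1 rate."""
    return credits / CREDITS_PER_DOLLAR

# 1M followers with full profiles at 1 credit/follower (200-per-page tier):
full = credits_to_usd(1_000_000 * 1.0)       # 10.0 dollars
# 1M follower IDs at 0.45 credits/ID (4,000-5,000-per-page tier):
ids_only = credits_to_usd(1_000_000 * 0.45)  # 4.5 dollars
savings = 1 - ids_only / full                # 0.55, i.e. 55% cheaper
```

The same arithmetic gives the search figure: 50,000 tweets at ~$0.00015 each is $7.50/month.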

Limits and gotchas worth knowing before you build

  • Time filters live inside the query string for advanced_search. Use since_time:UNIX and until_time:UNIX as operators within query, not as separate request params. This is the most common first-day bug.
  • Pagination cursors are stable for about 24 hours. If you pause a walk longer than that, restart from the beginning — stale cursors fail silently with empty result sets.
  • Soft-deleted tweets vanish. If a user deletes a tweet between two of your API calls, the second call no longer sees it. Cache aggressively on your side if you need archival completeness.
  • Private / protected accounts are not accessible. You will see isProtected: true in the user-info response but the follower / following lists return 403 — by design, matching what twitter.com itself shows to logged-out visitors.
  • Bulk follower pulls are tiered — always request the max page size. Setting count=200 for /followers (or count=5000 for /followers_ids) gives the cheapest unit cost; smaller pages cost 2–3× more per follower.
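To sidestep the first-day bug in the list above, embed the time operators inside the query string itself. A small helper makes that hard to get wrong (`query` as the request parameter name is an assumption; check the docs):

```python
from datetime import datetime, timezone

def time_window_query(base: str, start: datetime, end: datetime) -> str:
    """Embed since_time / until_time as query operators, not request params."""
    return (
        f"{base} "
        f"since_time:{int(start.timestamp())} "
        f"until_time:{int(end.timestamp())}"
    )

q = time_window_query(
    "from:nasa",
    datetime(2024, 1, 1, tzinfo=timezone.utc),
    datetime(2024, 1, 2, tzinfo=timezone.utc),
)
# -> "from:nasa since_time:1704067200 until_time:1704153600"
# Pass q as the single query value; do NOT send since_time as its own param.
```

Passing timezone-aware datetimes matters here: a naive datetime would be interpreted in the machine's local zone and silently shift your window by hours.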

FAQ

Do I need to apply for Twitter Developer access to scrape tweets?

No. TwitterAPI.io is an independent third-party service — there is no application, no review queue, no project gating. Sign in with Google, copy the API key from the dashboard, and you can hit /twitter/tweet/advanced_search the same minute. The official X Developer Platform requires project approval and gates real-time + historical access behind paid tiers; this is the friction we exist to remove.

How fresh is the data?

Real-time stream endpoints push tweets within roughly 250 ms of being posted (P50 ≈ 251 ms, P90 ≈ 327 ms, measured on the live demo at twitterapi.io/twitter-stream). REST endpoints like advanced_search return the most recent matching tweets in under 1 second for typical queries.

What can I scrape?

Public tweets, public user profiles, follower / following lists (with profiles or just numeric IDs), tweet replies, quote tweets, and tweet engagement metrics. Anything visible on twitter.com without logging in is in scope. Private accounts, DMs, and gated content are not accessible by design.

How does pricing work?

Per-call credit billing, USD-denominated (100,000 credits = $1). No subscriptions, no minimums. Bulk follower pulls use tiered pricing: at 200 followers per page you pay 1 credit per follower; the IDs-only endpoint goes as low as 0.45 credits per ID for pages of 4,000–5,000. Full live rates at docs.twitterapi.io.

Is this allowed? What about Twitter's terms?

TwitterAPI.io is independent — not affiliated with, endorsed by, or sponsored by X Corp. "Twitter" and "X" are trademarks of X Corp. We surface only publicly accessible information. You are responsible for how you use the data, especially regarding consent, GDPR / CCPA, and any downstream rate-limit policies of the systems you build.

Start scraping in five minutes

Sign in with Google, copy your API key from the dashboard, and paste one of the samples above. No approval queue, no minimum spend, no OAuth dance.