InstaYolo
by torrance · bulk download · tutorial · rate limit · concurrency · batch workflow

Bulk download Instagram videos: the honest tutorial (30 URLs, concurrency 2, no scraping)

You have a list of Instagram URLs — brand posts from a campaign, your own Reels for backup, a research corpus of public content — and one-at-a-time pasting has become absurd. This is the tutorial for doing it right: 30 URLs per batch, two in flight at a time, automatic retry on rate limits, per-URL failure rows so partial batches still deliver. No profile scraping. No credential harvesting. No magic.

Who this tutorial is actually for

Bulk Instagram downloading gets a bad reputation because a specific sketchy use case — silently mass-archiving someone else's private life — dominates the search results. That's not who we're writing this for, and that's not what our tool does. The people who actually need bulk mode are boring in the best way.

Social media managers collecting every post from a Q1 brand campaign before the client wants a recap deck. Content agencies pulling together a UGC inspiration board from 25 creators they already have release forms with. Researchers building a public-Reels dataset for a paper on short-form video compression. Journalists archiving posts from a public figure's feed before a story drops in case things get deleted. Creators backing up their own content because Instagram's download-your-data export is slow and misses the good quality.

All five of those workflows share the same bottleneck: the list of URLs already exists, and pasting them one by one into a single-URL tool is the wrong shape of work. That's the problem bulk mode solves. Not discovery. Not scraping. Just throughput on a list you already have.

What our bulk downloader actually does

Paste up to 30 Instagram URLs into the textarea on /bulk-downloader, one per line. The paste box accepts any public content URL — /reel/, /reels/, /p/, /tv/, /stories/, Highlights — and you can mix them freely. It counts valid URLs as you type and updates the button label ("Download 27" instead of "Download 30" if three lines weren't URL-shaped). Hit Download and the batch starts.

Two URLs parse at once. As each finishes, its result card renders immediately below the input — you can click through and save variant #1 while variants #2 through #30 are still being processed. No "all done" screen to wait for. A live status line at the top counts done / failed / pending.

Each card gives you the same options as our single-URL tool: merged MP4 with the audio track attached, M4A audio-only, MP3 transcode. Carousels expose each slide separately. Photos come through as JPG or WEBP, depending on what Instagram's CDN serves. Videos get remuxed to an MP4 container with `-c copy`, so whatever codec Instagram sent (H.264 or the newer VP9 rollout) passes through untouched.

A clean batch of 30 URLs at concurrency 2 lands at about 2 to 3 minutes end to end. Enough time to go make coffee. Not enough time to forget what you were doing.

Why concurrency 2 — and why that's not an arbitrary number

The obvious question when you first see "concurrency 2": why not 10? Or 30? Machines are fast. The reason isn't our compute — it's the rate-limit math on the other side.

Two ceilings stack. Our own /api/parse endpoint rate-limits at 20 requests per minute per IP. Instagram's CDN flags residential proxy exits after sustained request bursts. Two in flight with an average parse time of 8-15 seconds per URL lands somewhere near 8-12 requests per minute from any single user session — comfortably under our internal ceiling, and low enough to keep residential proxy IPs in the healthy pool rather than getting rotated out after a 429.
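If you want that arithmetic in a form you can poke at, here's the steady-state formula as a two-line sketch — not production code, just the two inputs that matter, evaluated at a representative 12-second parse time:

```typescript
// Back-of-envelope rate math: steady-state requests per minute
// for a given concurrency and average per-URL parse time.
function requestsPerMinute(concurrency: number, avgParseSeconds: number): number {
  return (concurrency * 60) / avgParseSeconds;
}

console.log(requestsPerMinute(2, 12)); // 10 req/min — comfortably under the 20/min ceiling
console.log(requestsPerMinute(5, 12)); // 25 req/min — past the limiter on an average batch
```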

Raising concurrency to 5 would finish 30 URLs faster on paper. In practice, it would push request rate past 25/min, trigger our own limiter on maybe every third batch, and burn residential proxy IPs at a rate that degrades the tool for everyone else on the platform at the same time. 2 is the sweet spot where throughput is high enough that users don't walk away and low enough that proxy pool health stays intact. We wrote the full proxy-pool architecture at /blog/how-residential-proxies-bypass-instagram-cdn — the short version is that every request costs a unit of IP-reputation budget, and spending that budget carelessly breaks the tool, not Instagram.

The counterintuitive part: throttling your own tool is the thing that keeps it working. Most Instagram downloaders that disappeared between 2023 and 2025 didn't get shut down by Meta. They got outrun by their own traffic — too many users, concurrency cranked too high, proxy pool collapsed into a 429 loop, refund requests spiked, operator moved on. Conservative defaults are boring infrastructure insurance.

The 30-URL cap, honestly explained

30 is the cap. Not 100. Not 1,000. That feels stingy at first glance — most people asking about bulk download have lists closer to 80 URLs than 30.

Here's the math. At concurrency 2, 30 URLs finish in roughly 15 rounds of ~8-15 seconds each, which is 2-3 minutes. 100 URLs would be 50 rounds, or 8-12 minutes — and somewhere around minute 4 the residential proxy pool starts seeing elevated 429 rates because the same user session has been hammering the same handful of exit IPs. The retry ladder (2s, 4s, 8s) kicks in more frequently, and batch completion time doesn't scale linearly. 100 URLs is often closer to 15 minutes than 10.
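The same formula extended to whole batches, if you want to sanity-check the cap yourself (same 12-second midpoint assumption, clean batch, no retries):

```typescript
// Rough end-to-end batch time: number of rounds × average parse time.
function batchMinutes(n: number, concurrency: number, avgParseSeconds: number): number {
  return (Math.ceil(n / concurrency) * avgParseSeconds) / 60;
}

console.log(batchMinutes(30, 2, 12));  // 3 minutes — matches the clean-batch numbers
console.log(batchMinutes(100, 2, 12)); // 10 minutes, before any 429 retries stretch it
```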

The UX breaks before the backend does. Users walking away from a 10-minute batch and forgetting they ran it is worse for everyone than being told up front: this tool does 30, and if you need 90 you run it three times. Three 3-minute batches with a 60-second gap between them is actually faster than one 12-minute batch that hits rate limits halfway through — 3 × 3 minutes plus two 1-minute gaps is 11 minutes, before counting the retries the long batch eats. Counterintuitive, but that's what the numbers show.

We may raise the cap later. Any bump waits on real data — success-rate telemetry across live batches, proxy pool health under higher sustained load. Not a gut call.

Exponential backoff on 429: what actually happens when you hit a rate limit

Every bulk downloader that works reliably has to answer one question: what happens when one URL out of 30 gets a 429 mid-batch? Three wrong answers — retry immediately (makes it worse), fail instantly (wastes URLs that would succeed on a second look), stall the whole batch on one URL (positions 8-30 now wait for position 7).

Our answer: exponential backoff on a single worker, while the other worker keeps draining the queue. When /api/parse returns 429 for a specific URL, that URL retries at 2 seconds, then 4, then 8, before giving up and reporting RATE_LIMITED on its card. The other worker is completely unaffected — it keeps pulling fresh URLs from the queue the whole time. Total added latency from one rate-limited URL is 2+4+8 = 14 seconds on the single row that hit it, zero added latency on everything else.
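Here's roughly what that retry ladder looks like in code — a minimal sketch, not our production worker, with the delays and endpoint taken from the description above:

```typescript
// Per-URL retry ladder: back off on 429, give up after three tries.
// The other worker keeps pulling from the queue independently.
const BACKOFF_MS = [2_000, 4_000, 8_000];

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function parseWithBackoff(url: string): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch("/api/parse", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url }),
    });
    if (res.status !== 429) return res;           // success or a non-retryable error
    if (attempt >= BACKOFF_MS.length) return res; // ladder exhausted: card shows RATE_LIMITED
    await sleep(BACKOFF_MS[attempt]);             // 2 s, then 4 s, then 8 s
  }
}
```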

In our verified 2026-04-23 live-traffic test on /bulk-downloader — three mixed URLs from @natgeo, including a deliberately invalid string — all three rows resolved within ~28 seconds total with zero 429 trips. That's the clean-batch baseline. We haven't yet run a public test engineered to deliberately trip the retry ladder — that's on the roadmap as telemetry matures. The retry logic is there, battle-tested in staging, and surfaces on the card when it fires.

For what else goes wrong in the wild, /blog/instagram-downloader-not-working catalogs all six failure modes that break single-URL and bulk tools alike. Rate limit is #1. The rest of the taxonomy applies to bulk mode identically — just with per-row error visible on each card.

What URLs you can actually mix in one batch

Anything public. Seriously — Reels at /reel/SHORTCODE/ or /reels/SHORTCODE/, posts at /p/SHORTCODE/, Stories at /stories/USERNAME/STORY_ID/, Highlights at /stories/highlights/HIGHLIGHT_ID/, IGTV at /tv/SHORTCODE/. All of them route through the same /api/parse endpoint, which figures out the content type from the URL pattern and dispatches to the right yt-dlp handler.
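For the curious, the dispatch step amounts to pattern-matching on the path. A minimal sketch, assuming the URL shapes listed above (the function name is ours for illustration):

```typescript
// Classify a public Instagram URL by its path pattern.
type ContentType = "reel" | "post" | "story" | "highlight" | "igtv" | "unknown";

function classify(raw: string): ContentType {
  const path = new URL(raw).pathname;
  if (/^\/reels?\/[\w-]+/.test(path)) return "reel";
  if (/^\/p\/[\w-]+/.test(path)) return "post";
  // Highlights must be checked before Stories: both live under /stories/.
  if (/^\/stories\/highlights\/\d+/.test(path)) return "highlight";
  if (/^\/stories\/[\w.]+\/\d+/.test(path)) return "story";
  if (/^\/tv\/[\w-]+/.test(path)) return "igtv";
  return "unknown";
}
```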

Mixed batches behave identically to homogeneous ones. You can paste 10 Reels + 10 Stories + 10 carousel posts and the results render with the right media type per row — videos get the MP4/M4A/MP3 option set, photos get single-image download, carousels expand to per-slide rows.

One thing we handle automatically that saves you a spreadsheet pass: share-attribution tokens. When you copy a URL from the Instagram app's share menu, it comes with a trailing `?igsh=AB3F2...` tracking token that Instagram uses to attribute which user shared what. Our paste box strips those on arrival so you don't have to. Same goes for trailing `?utm_source=` parameters from web shares.
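The cleanup itself is small. A sketch of the idea, using the two parameter names mentioned above (the real paste box may strip a longer list):

```typescript
// Drop share-attribution and campaign parameters; keep the rest intact.
const TRACKING_PARAMS = ["igsh", "utm_source"];

function stripTracking(raw: string): string {
  const url = new URL(raw);
  for (const param of TRACKING_PARAMS) url.searchParams.delete(param);
  return url.toString();
}

// stripTracking("https://www.instagram.com/reel/ABC123/?igsh=AB3F2xyz")
//   -> "https://www.instagram.com/reel/ABC123/"
```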

One thing we do not handle automatically: deduplication. If you paste the same /reel/ URL twice, we treat it as two URLs and process it twice. Both will succeed and return the same media, but you just burned a slot of your 30. Deduplicate your list before pasting if slots are scarce. A one-line sort-and-unique in your text editor does the job.

When partial batches fail: what the result looks like

A perfectly clean batch of 30 succeeds 30 times. A realistic batch doesn't — some fraction of URLs will fail, and the question is whether the tool tells you precisely why or just shows a red X.

Our cards surface the actual error per row. Rate-limited rows say RATE_LIMITED after the retry ladder gives up. Private-account rows pass through Instagram's own wording — "This account is private" — verbatim. Expired Stories past the 24-hour window say "content unreachable," the same wording yt-dlp returns from Instagram's native 404 response. CDN signature mismatches (rare, mostly affecting photo URLs that sat around in a cache somewhere) say SIGNATURE_MISMATCH.

Why does this matter? Because the fix is different for each. A RATE_LIMITED row should be re-pasted into the single-URL tool 60 seconds later; it'll almost always succeed. A private-account row will never succeed anywhere, and no amount of retry helps. An expired Story is gone from Meta's infrastructure entirely — archive.org occasionally has screenshots but the original bytes are deleted. Knowing which bucket a failure falls into is the difference between 30 seconds of retry and an hour of chasing dead URLs.
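That triage table is worth keeping literal. Here's the same logic as a lookup — the error codes match the card labels above, and the actions paraphrase this section:

```typescript
// Which failures are worth a retry, and which are permanent.
type RowError = "RATE_LIMITED" | "PRIVATE_ACCOUNT" | "STORY_EXPIRED" | "SIGNATURE_MISMATCH";

const triage: Record<RowError, string> = {
  RATE_LIMITED: "re-paste into the single-URL tool after ~60 s",
  PRIVATE_ACCOUNT: "permanent — no retry will ever help",
  STORY_EXPIRED: "permanent — the bytes are gone from Meta's infrastructure",
  SIGNATURE_MISMATCH: "re-parse to get a fresh signed CDN URL",
};
```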

Our full failure-mode taxonomy — all 6 categories, how to diagnose each, what actually fixes them — lives at /blog/instagram-downloader-not-working. If you're running bulk batches at any scale, bookmark that one. It's the reference for interpreting the error codes on your result cards.

Tips for reliable bulk runs

A few practical things that raise success rates on large lists. None are required — the defaults work — but these move the needle from "good" to "great" if you're doing this weekly.

Deduplicate first. Your 30 slots are valuable. Run your list through `sort -u` or VSCode's "Sort Lines (Unique)" before pasting. A batch of 22 unique URLs finishes faster and costs less proxy budget than 30 with duplicates.

Prefer full URLs over shortcodes. The paste box accepts bare shortcodes, but for Reels specifically the /reel/ path sometimes has different auth behavior than /p/ on Instagram's side. Full URLs remove that ambiguity.

Run off-peak if you hit rate limits consistently. The proxy pool is shared across all tool users, so peak hours (North American evening / EU morning overlap) see higher baseline 429 rates than 3am anywhere. Shifting to off-peak usually doubles success rate on research-scale workflows.

Batch your batches. 90 URLs? Run 3 sequential batches of 30 with a 60-90 second gap. The gap lets burned proxy IPs cycle back into the healthy pool. Faster end-to-end than cramming everything into one run.

What bulk mode won't do — and why

We don't support profile-wide enumeration. If you paste a creator's profile URL like instagram.com/natgeo/, nothing happens — it's not a content URL and the tool doesn't treat it as an instruction to "pull everything." That's deliberate.

Tools that do profile scraping exist. Most of them violate Instagram's Terms of Service explicitly, many of them require a bot-account cookie pool that gets burned on rotation, and all of them sit in a category that attracts different users than ours does. We decided early that the line between "user has a list of URLs they already want" and "tool goes find content on the user's behalf" is the line we won't cross. Public content is fine. Discovering and exhaustively enumerating someone's entire feed — even on a public account — isn't what we're building.

If you genuinely need profile-wide archival of your own content, Instagram's built-in "Download Your Information" flow (Settings → Accounts Center → Your information and permissions → Download your information) handles it officially. Slow, and the quality is lower than the original uploads in some cases, but it's the right tool for that job.

Bulk mode is for lists. You curate the list, we resolve the URLs. Clear boundary.

Worked example: brand-archival workflow

The most common bulk use case we see in the referral traffic: social media managers archiving a brand's campaign posts before a quarterly recap. Here's the actual shape of that workflow.

Step one: export the URL list from whatever tracking system the agency uses — a Notion database, a Google Sheet, sometimes just a Slack thread the PM kept. You want one column of Instagram URLs, 20-50 rows typically. Paste that column into a plain text editor, one URL per line, and run dedup.

Step two: split into batches of 30 if needed. For 47 URLs, that's batch 1 (30) and batch 2 (17). Don't try to do 47 in one shot; our tool caps at 30 anyway.
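Steps one and two are scriptable if the list is long. A minimal Node sketch — the file name is made up — that dedupes the pasted column and prints batches of 30:

```typescript
// Dedupe a one-URL-per-line text file and split it into batches of 30.
import { readFileSync } from "node:fs";

const urls = [...new Set(
  readFileSync("campaign-urls.txt", "utf8")
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.startsWith("http")),
)];

const BATCH_SIZE = 30;
for (let i = 0; i < urls.length; i += BATCH_SIZE) {
  console.log(`--- batch ${i / BATCH_SIZE + 1} ---`);
  console.log(urls.slice(i, i + BATCH_SIZE).join("\n"));
}
```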

Step three: paste batch 1 into /bulk-downloader. Hit Download. While the 30 cards are resolving, open a folder and get ready to save files. As each card finishes, click Download on the one you need. Videos come through as merged MP4 with audio — no separate audio file to worry about.

Step four: any RATE_LIMITED or failed rows, copy those URLs aside. Wait until batch 2 finishes, then go back and re-paste the failures into our single-URL /reels-downloader or /video-downloader. Most will succeed on the second attempt, usually within 60 seconds.

Step five: for Reels specifically, if the recap deck needs audio waveforms or quotes, use the M4A or MP3 option on individual cards to pull the audio track. No separate /reels-to-mp3 run needed; bulk mode exposes the same options per card.

47 URLs, start to finish, usually 8-10 minutes including manual file saving. Beats pasting one at a time for 45 minutes.

Worked example: your-own-content backup

Backing up your own feed before a migration, rebrand, or potential account issue. Instagram's official "Download Your Information" flow covers this via email-link export, but the process is slow (sometimes 48+ hours during peak) and the export sometimes drops video quality compared to the original upload.

Our tool is faster for the subset you actually care about. Open your profile in a browser, scroll the grid, copy the URL of each post worth preserving, collect into a text file, run through /bulk-downloader in batches of 30. Reels, posts, carousels, IGTV all come through at whatever the public CDN is serving — typically 1080p H.264 or VP9 depending on the encoder lottery that day (we caught one such flip live at /blog/instagram-vp9-transition-caught-live).

What bulk mode can't recover: Stories already past the 24-hour window (purged from Meta's infrastructure — no tool can bring them back), close-friends-only content, or posts you already deleted. For live Stories with time remaining, /story-downloader handles them — paste fast.

The technical anchor, for the curious

What's actually running under the hood: a sliding-window scheduler in the browser maintains exactly 2 in-flight POSTs to /api/parse at any time. Each hits a Node handler that spawns yt-dlp routed through a random exit from our Webshare residential proxy pool, which returns a manifest of available variants — typically 4 MP4 variants at 360p/540p/720p/1080p for videos, single JPG/WEBP for photos, a per-slide array for carousels.
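In code, the sliding window is smaller than it sounds. A minimal sketch of the browser side, reusing the parseWithBackoff helper from the 429 section, with a stand-in card renderer:

```typescript
// Two workers share one queue cursor; each pulls the next URL the
// moment it frees up, so exactly `concurrency` requests are in flight.
declare function parseWithBackoff(url: string): Promise<Response>;

function renderCard(url: string, res: Response): void {
  // Stand-in for the real card render — each row lands as it finishes.
  console.log(`${url}: ${res.ok ? "done" : `failed (${res.status})`}`);
}

async function runBatch(urls: string[], concurrency = 2): Promise<void> {
  let next = 0;
  const worker = async () => {
    while (next < urls.length) {
      const url = urls[next++]; // claim the next slot before awaiting
      renderCard(url, await parseWithBackoff(url));
    }
  };
  await Promise.all(Array.from({ length: concurrency }, worker));
}
```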

Click Download on a variant and a second POST to /api/download streams those bytes through our server — with ffmpeg `-c copy` remux for videos that need audio + video stream merging (most Reels; DASH serves them as separate tracks — /blog/instagram-dash-streaming-explained has the mechanism). Every parse generates fresh CDN URLs because signed CDN URLs carry an `oe=` timestamp that expires (anatomy at /blog/instagram-cdn-url-signature-anatomy). No caching across sessions, no stored URLs, no stored files. Throughput tool, not surveillance tool.
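The remux step, sketched under the assumption that ffmpeg is on the PATH — paths and the function name are illustrative, not our actual handler:

```typescript
// Copy both DASH tracks into one MP4 container without re-encoding.
import { spawn } from "node:child_process";

function remux(videoTrack: string, audioTrack: string, outFile: string) {
  return spawn("ffmpeg", [
    "-i", videoTrack,  // DASH video track
    "-i", audioTrack,  // DASH audio track
    "-map", "0:v:0",   // video from the first input
    "-map", "1:a:0",   // audio from the second
    "-c", "copy",      // no re-encode: whatever codec Instagram sent passes through
    outFile,           // merged MP4
  ]);
}
```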

Expert Tip

If you're running bulk batches on a list that's been sitting in a spreadsheet for weeks, the hit rate on older URLs is noticeably worse than on fresh ones. The reason isn't anything our tool does — it's that Instagram's content churn rate is higher than most people realize. Posts get deleted. Accounts go private. Stories expire (those especially — if any URL on your list looks like /stories/, confirm it's less than 24 hours old before it gets to us). The older the list, the more dead URLs you're wasting slots on.

Before a big batch run, spot-check 3-4 URLs from the top of the list in an incognito browser tab. If those load cleanly, the list is probably still good. If you see "Not Available" or a login wall on more than one, audit the list before pasting. Five minutes of triage beats a batch with 12 dead rows. For more on why URLs die, /blog/instagram-downloader-not-working has the full diagnostic tree.

Ready to run a batch

Head to /bulk-downloader and paste your list. 30 URLs per batch, two in flight at a time, automatic retry on rate limits, per-URL result cards. Reels, posts, Stories, Highlights, carousels — all fine in one mixed batch.

For single URLs, /reels-downloader handles Reels with the same variant picker, and /video-downloader is the catch-all for any public Instagram video URL. If something fails and the error code doesn't make sense, /blog/instagram-downloader-not-working is the diagnostic reference.

Not affiliated with Instagram or Meta. Public content only. We don't store your URLs, we don't store your downloads, and we'll never ask you to log into your Instagram account to "improve results." That's a hard rule in our project constitution and walking away from tools that ask is a good habit regardless.

FAQ

Why can I only paste 30 URLs at a time?
Two constraints set the ceiling. Concurrency stays at 2 because our /api/parse rate limiter allows 20 req/min per IP, and Instagram flags residential proxy exits after sustained request bursts. 30 URLs at concurrency 2 finishes in 2-3 minutes on a clean batch — long enough that some users would walk away from a 10-minute run but short enough to keep proxy pool health intact. 100 URLs would stretch to 8-12 minutes and start tripping 429s halfway through. Three batches of 30 with a 60-second gap is faster than one batch of 90 that hits rate limits. Not an arbitrary cap — the UX breaks before the backend does.
Can I mix Reels, posts, Stories, and Highlights in one batch?
Yes. All public content URLs route through the same /api/parse endpoint, which detects the content type from the URL shape and dispatches to the right handler. A batch with 10 Reels + 10 photo posts + 10 Stories behaves identically to 30 of the same type — each result card renders whatever options fit the media. We strip trailing ?igsh= share-attribution tokens automatically on paste, so copy-paste from the Instagram share menu works as-is.
What happens if 5 URLs fail mid-batch?
The other 25 complete normally. Each failed row shows its specific error — RATE_LIMITED (with built-in retry at 2s/4s/8s before giving up), private account, expired Story, CDN signature mismatch, etc. Rate-limited failures usually succeed on a single-URL retry 60 seconds later. Private-account and expired-Story failures are permanent and no tool can recover them. Knowing which bucket the failure falls into is the difference between 30 seconds of retry and an hour of chasing dead URLs — our /blog/instagram-downloader-not-working has the full diagnostic tree.
Can bulk mode download an entire creator's profile?
No, and we don't plan to add that. Profile enumeration crosses into ToS-adjacent territory even for public accounts, and the tools that do it tend to need bot-account cookie pools that violate Instagram's terms explicitly. Our line: you bring the list of URLs, we resolve them. Discovery isn't our job. If you need an archive of your own full feed, Instagram's built-in "Download Your Information" flow handles it officially — slower and lower-quality than our tool for the subset you actually care about, but the right tool for whole-profile archival.
Why concurrency 2 instead of 5 or 10?
Rate-limit math. Two in flight with 8-15 second average parse times lands at 8-12 requests per minute — comfortably under our 20/min internal limit and low enough to keep residential proxy IPs in the healthy pool. Concurrency 5 would finish faster on paper but trigger our limiter roughly every third batch and burn proxy IPs at a rate that degrades the tool for everyone. Throttling the tool is what keeps the tool working — most Instagram downloaders that disappeared in 2023-2025 didn't get shut down; they got outrun by their own traffic. Conservative defaults are infrastructure insurance. More on the proxy strategy at /blog/how-residential-proxies-bypass-instagram-cdn.
Do you store the URLs I paste or the files I download?
No. Our server holds bytes exactly long enough to stream them to you (and ffmpeg -c copy remux videos that need audio + video stream merging — most Reels, because DASH streaming serves them separately; /blog/instagram-dash-streaming-explained has the full mechanism). Nothing persists after the request finishes. No caching across sessions, no URL log, no file retention. Project constitution rule: we don't build a profile of what you're pulling. Throughput tool, not surveillance tool.

Related tools