Instagram Bulk Downloader
Paste up to 30 Instagram links, one per line. We parse them in parallel.
One paste box, thirty results
Drop a list of Reel / post / Story URLs into the textarea above, hit Download, watch them resolve one by one. Two run in parallel at any given moment — enough to feel fast, restrained enough to stay under Instagram's rate limits and keep our residential proxy pool healthy. 30 URLs at concurrency 2 is roughly 15 rounds; expect 2-3 minutes end to end on a clean batch. Each result gets its own card with the same Download / M4A / MP3 options as the single-URL flow.
How it works
- 1. Gather your Instagram URLs. Copy each Reel, post, or Story link you want. Valid shapes: instagram.com/p/…, /reel/…, /reels/…, /tv/…, /stories/…. One URL per line.
- 2. Paste into the box above. Multi-line paste is fine. We count valid URLs as you type and tell you how many will run.
- 3. Hit Download. Each URL parses independently. Two run at a time; completed ones render their result card immediately so you can click through while the rest are still in flight. A live status line shows how many of the 30 are done, failed, or still pending.
- 4. Save each result. Per-URL Download button, same flow as the single-URL tool. Videos get merged MP4, photos get JPG/WEBP, carousels expose each slide separately.
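The gather-and-count logic in steps 1 and 2 can be sketched roughly like this. It's an illustrative TypeScript sketch, not the tool's actual source: the function names and hostname check are assumptions; only the path shapes and the 30-URL cap come from this page.

```typescript
// Illustrative sketch of per-line URL validation. The accepted path
// prefixes mirror the "valid shapes" list in step 1; everything else
// (names, hostname regex) is assumed.
const VALID_PATHS = ["/p/", "/reel/", "/reels/", "/tv/", "/stories/"];

function isInstagramUrl(line: string): boolean {
  try {
    const url = new URL(line.trim());
    if (!/(^|\.)instagram\.com$/.test(url.hostname)) return false;
    return VALID_PATHS.some((p) => url.pathname.startsWith(p));
  } catch {
    return false; // not a parseable absolute URL at all
  }
}

// Split the textarea contents into lines, keep valid URLs, cap at 30.
function collectUrls(pasted: string): string[] {
  return pasted
    .split("\n")
    .map((line) => line.trim())
    .filter(isInstagramUrl)
    .slice(0, 30);
}
```

Note the sketch assumes fully-qualified links (with https://); a bare instagram.com/p/… paste would need a scheme prepended before `new URL` accepts it.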
Features
Up to 30 URLs per run
Three times the usual batch tool ceiling. 30 URLs at concurrency 2 lands in 2-3 minutes — fast enough to not walk away, slow enough to not burn our proxy pool. Re-run for larger lists.
Concurrency of 2
Two in flight at a time stays under both our /api/parse rate limit (20 req/min) and Instagram's per-IP flag threshold on residential proxies. Raising concurrency would finish faster but flag proxies faster too — not worth the trade.
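A two-worker sliding window of this kind can be sketched as follows, assuming a `parse()` function that wraps the /api/parse call. The names and the result-row shape are illustrative; only the concurrency constant comes from this page.

```typescript
const CONCURRENCY = 2;

type BatchRow<T> = { url: string; result?: T; error?: unknown };

// Sliding-window pool: two workers share one cursor into the queue.
// Because JS is single-threaded, the `next++` claim below is race-free.
async function runBatch<T>(
  urls: string[],
  parse: (url: string) => Promise<T>,
): Promise<BatchRow<T>[]> {
  const rows: BatchRow<T>[] = [];
  let next = 0;

  const worker = async (): Promise<void> => {
    while (next < urls.length) {
      const url = urls[next++]; // claim the next queue slot
      try {
        rows.push({ url, result: await parse(url) });
      } catch (error) {
        rows.push({ url, error }); // partial failure: keep draining
      }
    }
  };

  // Exactly two workers in flight; each pulls a new URL the moment it
  // finishes its current one, so the window slides rather than batching.
  await Promise.all(Array.from({ length: CONCURRENCY }, () => worker()));
  return rows;
}
```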
Exponential backoff on 429
If /api/parse returns a rate-limit response, that URL retries at 2s / 4s / 8s before giving up. The other worker keeps draining the queue, so one slow URL doesn't stall the batch.
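A minimal version of that retry ladder looks like this sketch. The fetch client is injected because the real /api/parse call's signature isn't shown on this page; only the 2s / 4s / 8s ladder and the RATE_LIMITED error code come from it.

```typescript
// Sketch of the 429 retry ladder: wait 2s, then 4s, then 8s, then give up.
const BACKOFF_MS = [2000, 4000, 8000];

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function parseWithRetry<T extends { status: number }>(
  url: string,
  fetchFn: (url: string) => Promise<T>, // stands in for the real /api/parse client
  waits: number[] = BACKOFF_MS,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetchFn(url);
    if (res.status !== 429) return res;  // anything but a rate limit: done
    if (attempt >= waits.length) {
      throw new Error("RATE_LIMITED");   // the error code the row reports
    }
    await sleep(waits[attempt]);         // 2s, then 4s, then 8s
  }
}
```

Since each URL retries inside its own worker, the waits above block only that worker's slot, which is why the other worker keeps draining the queue.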
Partial failure is fine
If one URL errors (private post, expired Story, rate-limited CDN), the rest keep going. The error row shows exactly why it failed, so you can re-paste the affected URL in the single-URL tool.
No ZIP, by design
A ZIP-all feature would need server-side buffering we don't currently do. Per-URL Download keeps everything streaming with no disk hit.
What we observed
- Verified on 2026-04-23 against production immediately after deploy. Pasted three lines into the bulk paste box on Mac Chrome: a /reel/ URL, a deliberately invalid string ("not-an-instagram-url"), and a /p/ URL with the same shortcode as the first. The UI correctly counted 2 valid URLs (button label read "Download 2") and skipped the middle line. Hit Download, watched both rows show up as "Parsing" simultaneously (concurrency 2), then complete within ~28 seconds total. Row #1 resolved to 'Reel by @National Geographic' with 4 MP4 variants (1080p, 720p, 540p, 360p) served off a different vp9-basic-gen2 encoding profile than the baseline H.264 we saw earlier in the day — Instagram had re-encoded the asset between tests, which our backend picked up transparently. Row #2 resolved to 'Photo by @National Geographic' with the same 4 variants (shortcode namespace is unified, same underlying media). No error state surfaced, no rate-limit triggered. First real validation of the concurrency-2 scheduler under live traffic. (2026-04-23)
FAQ
- Why 30 URLs max — not 100?
- Two hard constraints set the ceiling. Concurrency stays at 2 because our /api/parse rate limiter allows 20 req/min per IP and Instagram flags residential proxy IPs after sustained bursts. 30 URLs at concurrency 2 finishes in roughly 2-3 minutes — long enough that some users walk away and miss results, short enough to not burn the proxy pool. 100 would push past 8 minutes on a clean batch and much longer if any URL hits a rate limit and enters the 2s / 4s / 8s retry ladder. The UX breaks before the backend does. Run the tool in three waves of 30 if you genuinely need 90.
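The round math behind those numbers works out as below, assuming roughly 10 seconds per parse — a figure inferred from the 2-3 minute claim, not one the backend states.

```typescript
// Back-of-envelope batch timing: ceil(urls / concurrency) rounds, each
// bounded by one parse. secsPerParse is an assumed average, not a spec.
function estimateSeconds(urls: number, concurrency: number, secsPerParse: number): number {
  return Math.ceil(urls / concurrency) * secsPerParse;
}

// 30 URLs at concurrency 2 → 15 rounds ≈ 150 s (about 2.5 minutes)
// 100 URLs at concurrency 2 → 50 rounds ≈ 500 s (past 8 minutes)
```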
- Can I bulk-download someone's entire profile?
- No. Per-profile enumeration isn't built, and we don't plan to ship it. Scraping a creator's whole feed creeps toward ToS-adjacent territory even for public accounts. The bulk tool is for URLs you already have a list of, not for discovering new ones.
- What happens if a URL fails mid-batch?
- The row shows the error code and message. The other URLs continue processing. Rate-limit (429) responses trigger a built-in retry ladder — 2 seconds, then 4, then 8 — before the row gives up and reports RATE_LIMITED. Any other failure (private post, expired Story, CDN signature mismatch) surfaces immediately without retry. Re-paste the affected URL in the main downloader if you want a second attempt.
- Does the batch run sequentially or in parallel?
- Sliding-window parallel with concurrency 2. Two URLs are always in flight until the queue drains. Roughly 2x the throughput of sequential, while staying well under the rate limits that would trigger flags. When one worker is waiting out a retry backoff on a 429, the other keeps pulling from the queue — the batch doesn't stall on a single slow URL.
- Will CSV upload be supported?
- Not planned. A text area works — paste your CSV column into the box and it splits on newlines. If your URLs come with trailing share-attribution tokens (?igsh=…), we strip those automatically at paste time.
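A paste-time cleanup of the kind described could look like this sketch; `stripShareToken` is an illustrative name, and it removes only the ?igsh= parameter mentioned above.

```typescript
// Drop Instagram's share-attribution token from a pasted link so
// duplicates normalize to the same URL. Illustrative sketch only.
function stripShareToken(link: string): string {
  try {
    const url = new URL(link.trim());
    url.searchParams.delete("igsh"); // the ?igsh=… share token
    return url.toString();
  } catch {
    return link; // not a parseable URL; leave it for validation to reject
  }
}
```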
- Can I mix Reels, posts, photos, and Stories in one batch?
- Yes. Each URL routes through the same /api/parse endpoint our single-URL tool uses. Mixed batches work the same as homogeneous ones — each result card renders the right options for whatever media type it got back.
More downloaders
Something unclear? The FAQ covers format, quality, privacy, and legality. For a different content type, jump to every downloader we run. Team + contact on About.