Cloudflare R2 vs S3 for Static Assets: The Egress Bill Is the Whole Argument

A friend forwarded me an AWS bill last month. A side-project landing page, a single hero video, a Hacker News front page. The S3 storage line was forty-three cents. The egress line was four hundred and twelve dollars. The video had been viewed about ninety thousand times.
He hadn't done anything wrong. The video sat in a perfectly ordinary S3 bucket behind a CloudFront distribution, exactly the way every AWS tutorial recommends. His mistake — to the extent there was one — was choosing S3 in 2026 instead of R2.
That bill is the entire argument. Egress pricing is the only thing that matters when you're picking object storage for static assets, and it's the dimension where S3 and R2 have diverged so completely that the comparison is now lopsided. The interesting question isn't which is better — it's under what specific conditions S3 still makes sense, and the honest answer is: fewer than you'd guess.
What Egress Actually Costs on S3
Let's start with numbers, because the conversation usually gets vague here. S3 charges $0.023/GB for standard storage in us-east-1 and $0.09/GB for outbound data transfer to the internet. The storage cost is trivial. The egress cost is everything.
A single 50MB video served to 100,000 viewers is 5,000 GB of egress. At $0.09/GB, that's $450. If you're serving it through CloudFront, the price drops to $0.085/GB on the first 10TB tier and lower from there, plus per-request fees. Still: $400+ for a viral moment. The storage of that same video for an entire month costs about a penny.
This is why every senior engineer who's been burned by an S3 bill has the same reflex: put a CDN in front of it. CloudFront, Fastly, Cloudflare — pick one. The CDN reduces origin egress to a trickle by caching at the edge, and you pay the CDN's egress rates instead of S3's. For a long time, that was the conventional wisdom: S3 for storage, CDN for delivery, and don't think too hard about the math.
R2 collapsed that architecture into a single bill, and it did it by making the egress line go away entirely.
What R2 Actually Costs
Cloudflare R2's pricing fits on one line: $0.015/GB/month for storage, zero for egress. Operations cost a small amount — Class A (writes) are $4.50 per million; Class B (reads) are $0.36 per million — but for a static asset workload, those are noise.
That same 100,000-view, 50MB video on R2 costs: storage for the video (a fraction of a penny per month), zero egress, and roughly $0.04 in Class B operation fees. Total bill: under five cents for the burst that cost $400+ on S3.
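The burst math on both sides is easy to check. Here's a minimal sketch in JavaScript using the published rates quoted above; the function names and the decimal-GB convention are mine:

```javascript
// Back-of-envelope cost of one viral burst: a 50 MB video
// downloaded in full 100,000 times, at the quoted rates.
const toGB = (mb) => mb / 1000; // decimal GB, matching provider billing

function s3BurstCost(fileMB, views) {
  const egress = toGB(fileMB) * views * 0.09; // S3 internet egress, $/GB
  const gets = (views / 1000) * 0.0004;       // S3 GET requests, $/1K
  return egress + gets;
}

function r2BurstCost(views) {
  // Egress is $0 by design; only Class B reads are billed, $/million.
  return (views / 1_000_000) * 0.36;
}

console.log(s3BurstCost(50, 100_000).toFixed(2)); // "450.04"
console.log(r2BurstCost(100_000).toFixed(2));     // "0.04"
```

The storage line is omitted on purpose: at these prices it never changes the answer.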
The reason R2 can do this is structural, not promotional. Cloudflare has spent fifteen years building a global network paid for by other lines of business, and they treat egress from that network as a sunk cost. They don't need to charge for it the way AWS does because their accounting works differently. Whether that's sustainable forever isn't the question — the question is what to build today, and today the rate card says zero.
A footnote worth mentioning: R2 is S3-API-compatible. The same aws-sdk calls work against R2 with a different endpoint URL. Most existing S3 code ports in an afternoon, which is the second reason the comparison is lopsided. The switching cost is small.
When R2 Obviously Wins
R2 is the correct choice for any static asset workload where egress is non-trivial. That's a much wider set than people realize:
Hero images and video on marketing sites. A landing page with a 2MB hero image and 50,000 monthly visitors is 100GB of egress per month. That's nine dollars on S3 with no CDN, or pennies on R2.
Public downloads. Software installers, PDF whitepapers, podcast MP3s, dataset releases. Any time a file is downloaded in full by many users, you're paying per-byte egress on S3 and paying nothing on R2.
User-generated media on consumer apps. Profile photos, video uploads, document attachments. If your app has any consumer reach, the read traffic dwarfs the write traffic, and reads are exactly what S3 charges most for.
Backup tarballs and dev artifacts shared with clients. This is the one most teams forget. A weekly client export, a build artifact dropped for QA, a video file shared via a signed URL — these are infrequent but large, and on S3 they each cost a measurable amount.
Anything that might go viral. This is the case my friend hit. You don't know in advance that a thing will be popular. With S3, the cost of being wrong is a four-figure bill. With R2, the cost of being wrong is rounding.
For all of these, the architecture is identical to S3: bucket, public read access (or signed URLs), point a domain at it, done. You don't need a CDN in front of R2 — Cloudflare's network is the CDN. The single-tier pricing means you don't have to negotiate between origin and edge.
When S3 Still Wins
This is the section that usually gets skipped in R2 evangelism, so it deserves real attention. S3 is not obsolete. There are genuine cases where it remains the right tool.
You're already deep inside AWS and your data has gravity. If your application runs on EC2 in us-east-1 and S3 holds the data being processed by Lambda, Glue, Athena, or anything else AWS-native, the egress to those services is free — it's intra-region traffic, not internet egress. R2 would force every read to leave AWS, hit the public internet, and come back, which is both slower and more expensive in transfer. S3 wins here on physics, not pricing.
You need lifecycle policies, Intelligent-Tiering, or Glacier. S3 has the most mature object-lifecycle features in the industry. If you're storing terabytes of cold data and want automatic transitions between hot, warm, cold, and deep archive tiers based on access patterns, S3 has built tooling for that for over a decade. R2 has a single tier. For a long-tail archive workload — medical imaging, legal discovery, log retention — S3's tiering can be cheaper than R2's flat rate.
You need specific compliance certifications S3 has and R2 might not. AWS has spent enormous effort accumulating compliance certifications: FedRAMP High, IL5, HIPAA BAAs at every level, regional sovereignty options for European data, the whole alphabet soup. R2 has many of these, but not all, and not always at the same tier. For regulated workloads, the answer is to check the current certification list against your requirements rather than assume.
You depend on S3-specific features beyond the core API. Object Lock for WORM compliance, S3 Replication across regions, S3 Batch Operations for large-scale transformations, event notifications wired into SQS or Lambda. R2 has analogs for some of these and not others. If your architecture relies on these features, S3 stays.
You need extreme write throughput from inside AWS. S3 has had years of optimization on hot-key write performance and parallel multipart upload behavior. For workloads writing tens of thousands of objects per second from AWS compute, S3 is still the more predictable choice.
Notice what's not on this list: cost-sensitive read-heavy static asset hosting. That's exactly the case R2 was designed to win, and it does.
The Math, Concretely
Let me make this less hand-wavy with an actual example. Say you're hosting:
- 100GB of static assets (images, videos, downloads)
- Serving 5TB of egress per month
- Roughly 10 million read operations per month
S3 (no CDN):
- Storage: 100GB × $0.023 = $2.30
- Egress: 5,000GB × $0.09 = $450.00
- GET operations: 10M × $0.0004/1K = $4.00
- Total: ~$456/month
S3 + CloudFront:
- Storage: $2.30
- Egress S3 → CloudFront: $0 (origin egress to CloudFront is free)
- CloudFront egress: 5,000GB × $0.085 = $425.00
- CloudFront requests: 10M × $0.0075/10K = $7.50
- Total: ~$435/month
R2:
- Storage: 100GB × $0.015 = $1.50
- Egress: $0
- Class B operations: 10M × $0.36/M = $3.60
- Total: ~$5.10/month
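The three breakdowns above can be reproduced in a few lines of JavaScript — the rates are as quoted, and the function names are mine:

```javascript
// Monthly cost of the example workload: 100 GB stored,
// 5,000 GB of egress, 10 million reads, at the quoted rates.
const workload = { storageGB: 100, egressGB: 5000, reads: 10_000_000 };

function s3NoCdn({ storageGB, egressGB, reads }) {
  return storageGB * 0.023 + egressGB * 0.09 + (reads / 1000) * 0.0004;
}

function s3WithCloudFront({ storageGB, egressGB, reads }) {
  // Origin-to-CloudFront transfer is free; you pay CloudFront's rates.
  return storageGB * 0.023 + egressGB * 0.085 + (reads / 10_000) * 0.0075;
}

function r2Monthly({ storageGB, reads }) {
  // Zero egress; Class B (read) operations are billed per million.
  return storageGB * 0.015 + (reads / 1_000_000) * 0.36;
}

console.log(s3NoCdn(workload).toFixed(2));          // "456.30"
console.log(s3WithCloudFront(workload).toFixed(2)); // "434.80"
console.log(r2Monthly(workload).toFixed(2));        // "5.10"
```

Swap in your own numbers: the ratio barely moves, because the egress term dominates both S3 configurations at any realistic traffic level.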
R2 is roughly two orders of magnitude cheaper than either S3 configuration for this workload. The gap widens as traffic grows. At 50TB/month, S3+CloudFront crosses four thousand dollars; R2 is still under fifty.
The S3 + CloudFront row is the one to dwell on. Adding a CDN does help — it just doesn't help anywhere near enough. The egress problem is structural to S3's pricing model, and a CDN in front of it is a partial fix that still leaves you paying CloudFront's egress rates, which start at $0.085/GB and only drop meaningfully at tens-of-terabytes scale.
What the Migration Actually Looks Like
Because R2 implements the S3 API, the migration is mostly a config change. Here's the shape of it:
1. Create an R2 bucket and an API token. The Cloudflare dashboard does this in two clicks. You get an Access Key ID, a Secret Access Key, and an account-specific endpoint URL like https://<account>.r2.cloudflarestorage.com.

2. Copy the bucket. rclone is the workhorse here. A single command syncs your S3 bucket to R2, preserving keys and metadata. For terabyte-scale copies, you can run it from an EC2 instance to avoid pulling data over the public internet on the source side.

   ```shell
   rclone sync s3:source-bucket r2:destination-bucket --progress
   ```

3. Update your application's S3 client. Point the SDK at R2's endpoint and feed it the R2 credentials. The code itself doesn't change.

   ```javascript
   const s3 = new S3Client({
     region: "auto",
     endpoint: "https://<account>.r2.cloudflarestorage.com",
     credentials: { accessKeyId, secretAccessKey },
   });
   ```

4. Bind a custom domain (optional but recommended). R2 lets you connect a domain you control directly to the bucket, so public URLs look like https://media.yoursite.com/file.jpg instead of the default R2 hostname. Routing happens through Cloudflare's CDN automatically.

5. Delete the S3 bucket once you've verified the migration. This is the step that turns the cost savings into actual savings, and it's the one teams skip out of habit. Don't leave the old bucket sitting there accruing storage charges.
For most projects, this is a half-day of work. The slow parts are the data copy and verifying the cutover; the code changes are trivial.
The Boring Conclusion
Cloud storage was once an AWS-only conversation. It isn't anymore. The S3 API has become a de facto standard the way SMTP became a standard — implemented by every credible provider, no longer a moat for the company that invented it. The differentiator is now pricing structure, and R2's structure happens to fit the static-asset use case almost perfectly.
That doesn't make S3 wrong. It makes S3 a specific tool for specific workloads: data that stays inside AWS, archives that need tiering, regulated workloads that need certifications, write-heavy systems that need decades-old optimization. For a hero image, a podcast file, a software download, a marketing video — all the public-internet workloads where the bill is dominated by people downloading things — R2 is the default and S3 is the legacy.
The reason most teams still pick S3 is the same reason most teams still pick anything: it's what the last team picked. The egress bill arrives later, and by then the architecture has calcified around the assumption. The migration window is now, before the next viral moment, before the next product launch, before the four-hundred-dollar invoice for ninety thousand video views.
Egress used to be the cost of doing business on the internet. R2 made it the cost of choosing the wrong storage backend.