Database Infra

Database Scaling: Managed vs Self-Hosted, What Breaks First

Dr. Somya Hallan · Apr 22, 2026 · 22 min read

Most teams don’t think about database scaling until something breaks. A query that used to take 50 milliseconds now takes 3 seconds. The connection pool starts timing out at peak hours. The monthly bill jumps from $500 to $2,000 and nobody can explain exactly why.

At that point, the “managed vs self-hosted” question stops being theoretical. The real question becomes: what specifically is breaking, and is it because of the model you chose or would it break regardless?

In most cases, managed databases break first at system limits such as replica caps, single-writer bottlenecks, and storage constraints you can’t bypass. Self-hosted databases break first at operational complexity involving failover, backups, and scaling systems that your team has to build and maintain.

The right choice isn’t about which model scales further in theory. It’s about what fails earlier for your team, at your stage, with your resources.

The detailed comparison of managed vs self-hosted databases and which is better for your startup has already been covered in our previous article. This blog focuses specifically on database scaling, managed vs self-hosted: the hard limits, the exact numbers, what breaks first in each model, and when it makes sense to switch.

We’ll cover real scaling limits from AWS RDS documentation, the operational challenges that actually break self-hosted setups, a stage-based framework for deciding when to move, and a third option that sits between managed and self-hosted that most teams don’t know about.

If your database is starting to feel slow, expensive, or constrained, this is the guide to read before making your next infrastructure decision.

How does database scaling managed vs self-hosted actually differ?

Database scaling managed vs self-hosted differs in one fundamental way: managed databases make scaling easy until you hit a ceiling that you can’t move. Self-hosted databases have no ceiling, but you’re the one building the ladder. The trade-off is convenience vs control, and it shows up the moment your database outgrows its first configuration.

There are two types of scaling, and they work differently depending on which model you’ve chosen.

Vertical scaling (making your machine bigger)

Think of it like moving from a small desk to a bigger one. More space, same room. Easy to do but the room has a maximum size.

  • On managed (RDS): Click a button, pick a larger instance. Downtime may be required during the switch.
  • On self-hosted: Resize your VM. Same process, but you pay raw compute pricing, no vendor markup on the bigger machine.

Horizontal scaling (adding more machines)

Think of it like adding more desks in more rooms. More total capacity, but now you need to coordinate between rooms.

  • On managed (RDS): Add read replicas, but you’re capped at 5 for RDS PostgreSQL (or 15 if you upgrade to Aurora). And writes still go through a single machine.
  • On self-hosted: Add as many replicas as your infrastructure supports. Set up sharding if you need to distribute writes. But you build and maintain every connection between them yourself.
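The coordination cost of horizontal scaling shows up even in a trivial router. Here is a minimal Python sketch, with placeholder DSN strings (not a production router), of what single-writer routing implies: every write goes to the one primary, while reads round-robin across replicas:

```python
from itertools import cycle

class SingleWriterRouter:
    """Route writes to the single primary; spread reads across replicas.

    The DSN strings are placeholders -- in a real setup they would point
    at your actual primary and replica endpoints.
    """

    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary
        # Round-robin iterator over read replicas (fall back to primary).
        self._reads = cycle(replicas or [primary])

    def route(self, query: str) -> str:
        # Naive classification: anything that isn't a SELECT is a write.
        if query.lstrip().upper().startswith("SELECT"):
            return next(self._reads)
        return self.primary  # every write hits the single primary

router = SingleWriterRouter(
    "postgres://primary", ["postgres://replica-1", "postgres://replica-2"]
)
print(router.route("SELECT * FROM users"))    # a replica
print(router.route("INSERT INTO users ..."))  # always the primary
```

Notice the asymmetry: adding a replica grows read capacity, but nothing in this picture grows write capacity. That is the single-writer bottleneck discussed later in this guide.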

Managed vs Self-Hosted Database Scaling: Vertical vs Horizontal Comparison

Here’s how vertical and horizontal scaling differ across managed and self-hosted setups:

| Model | Vertical (Scale Up) | Horizontal (Scale Out) |
|---|---|---|
| Managed | Easy: click a button, but a ceiling exists | Capped: 5 replicas (RDS PG), 15 (Aurora). Writes go through a single server. |
| Self-hosted | Easy: resize your VM, no vendor markup | Unlimited, but you build and maintain everything |


Most teams start with vertical scaling because it’s simple. The real differences between managed and self-hosted show up when vertical hits its ceiling and you need to go horizontal. That’s when managed starts imposing limits and self-hosted starts demanding expertise.

What does scaling architecture actually look like in each model?

So far, we’ve talked about scaling in theory: vertical vs horizontal. But what does it actually look like when you’re running a production system?

Managed (RDS / Aurora)

At a high level, most managed setups follow the same pattern:

  • 1 primary database → handles all writes
  • Up to 5–15 read replicas → handle read traffic
  • Connection pooling layer → manages incoming connections

Everything is designed around a single-writer architecture. You can scale reads horizontally (to a point), but writes always go through one machine.

This works well until:

  • Read traffic exceeds replica limits
  • Write traffic becomes the bottleneck
  • Failover events introduce lag or downtime

At that point, you’re not scaling anymore, you’re working around constraints.

If you’re already running into these trade-offs, it’s worth understanding the broader picture in our guide on managed vs self-hosted databases and which is better for your startup.

Self-hosted

Self-hosted architectures look similar at the start but diverge quickly as you scale:

  • Primary + replicas → same as managed, but fully configurable
  • Custom failover setup (Patroni, etc.) → you define how recovery works
  • Load balancing layer → routes traffic across replicas
  • Optional sharding layer → splits data across multiple databases

There’s no imposed ceiling: you can add replicas, introduce sharding, and separate workloads however you want.

But every component comes with responsibility:

  • You configure failover
  • You monitor replication lag
  • You debug issues across multiple nodes

Scaling is no longer a button. It’s a system you build and maintain.
If you’re evaluating whether that trade-off makes sense for your team, the cost and operational breakdown in AWS RDS vs Self-Hosted PostgreSQL: Complete Cost Comparison helps put numbers behind it.

What this means in practice

Managed scaling gives you a pre-built architecture with limits.
Self-hosted gives you a flexible architecture with responsibility.

The moment you need to go beyond a single-writer system or want more control over how traffic is distributed, the difference becomes very real.

Quick comparison: scaling architecture

  • Managed: Fixed architecture, pre-configured scaling, clear limits
  • Self-hosted: Flexible architecture, no hard limits, but fully self-managed

In simple terms, managed gives you a system that works until it doesn’t. Self-hosted gives you a system that can scale indefinitely, if your team can keep up with it.

What are the hard limits of managed database scaling?

Managed databases like AWS RDS PostgreSQL have specific, documented limits: 64 TiB storage, 5 read replicas, single-writer architecture (meaning only one server handles all writes), and you cannot shrink storage once you increase it. These are hard limits, not soft guidelines. And when you hit them, the experience is painful.

[Figure: managed database limits (storage cap, replica limit, single-writer constraint)]

AWS RDS PostgreSQL Scaling Limits (Real Numbers and Constraints)

Here are the actual scaling limits of AWS RDS PostgreSQL including storage, replicas, and performance constraints:

These aren’t edge cases. These are hard limits.

| What Has a Limit | The Actual Limit | What It Means (Plain English) |
|---|---|---|
| Storage | 64 TiB max (Aurora: 128 TiB) | Your database can’t grow past this. Period. |
| Read replicas | 5 per source (Aurora: 15) | You can only spread read traffic across this many copies. |
| Largest instance | m8g.24xlarge (768 GiB RAM) | The biggest machine available. There’s nothing larger. |
| IOPS | 256,000 max on io2 volumes | The maximum speed your storage can read/write data. |
| Connections | ~5,000 before things slow down | Each connection uses 5–10 MB of memory. Past this, the database starts struggling. |
| Write scaling | Single-writer ONLY | All writes go through one machine. You cannot split writes across servers. |
| Storage shrinking | Impossible | If you increase storage, you’re paying for it forever, even after traffic drops. |
| Disk modifications | 4 per 24 hours | You can only resize storage a few times per day, which can hurt during emergencies. |

Sources: AWS RDS Quotas and Constraints and PostgreSQL Limits: Appendix K
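The ~5,000-connection figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses the 5–10 MB-per-connection figure from the table; the reserved-memory fraction is an illustrative assumption, not an AWS-documented formula:

```python
def practical_connection_ceiling(ram_gib: float,
                                 per_conn_mb: float = 10.0,
                                 reserved_fraction: float = 0.75) -> int:
    """Rough ceiling on direct connections before memory pressure.

    Assumptions (illustrative only):
    - each Postgres backend uses ~per_conn_mb of RAM
    - reserved_fraction of RAM is kept for shared buffers / OS cache
    """
    usable_mb = ram_gib * 1024 * (1 - reserved_fraction)
    return int(usable_mb / per_conn_mb)

# Even on a 256 GiB box, reserving 80% of RAM for caching lands you
# close to the ~5,000 figure in the table above:
print(practical_connection_ceiling(256, per_conn_mb=10, reserved_fraction=0.8))
# → 5242
```

This is why connection poolers like PgBouncer exist: they let thousands of client connections share a few hundred real backends instead of each one claiming its own slice of RAM.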

What this actually feels like when it happens

Numbers in a table are one thing. Living through them is another.

When connections hit the ceiling:

Your application starts throwing timeout errors. Users see “something went wrong” pages. Your on-call engineer scrambles to resize the instance, which requires downtime. The “quick fix” costs 2x your previous compute bill. You’re now paying double for a problem that will come back the next time traffic spikes.

When you need more than 5 read replicas:

Your analytics dashboard, your search feature, and your reporting tool are all reading from the same pool of 5 replicas. One heavy query from the reporting team slows everything down for everyone. There’s no way to add a 6th replica on standard RDS PostgreSQL.

Your options: upgrade to Aurora (higher cost; see how SelfHost compares to AWS RDS), build application-level routing (months of engineering work), or accept the slowdown.

When the single-writer bottleneck hits:

This is the one teams don’t see coming. Everything works fine until write volume crosses a threshold. Then inserts slow down, transactions queue up, and your application starts feeling “laggy” even though read queries are still fast.

The fix isn’t a bigger instance. It’s redesigning your application architecture to shard writes across multiple databases. That’s a multi-month engineering project.

When you can’t shrink storage:

You provisioned 1TB during a data migration. The migration is done. Your actual data is 200GB. You’re paying for 1TB forever, because RDS doesn’t allow you to reduce allocated storage.

The workaround is creating a brand new, smaller instance and migrating everything over. For a production database, that’s a weekend project at minimum.
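To put a number on that 1 TB vs 200 GB gap: the price below is an illustrative gp3-style figure, not a quote (check current RDS pricing for your region and storage type), but the shape of the math holds:

```python
ASSUMED_PRICE_PER_GB_MONTH = 0.115  # illustrative figure; varies by region/type

def wasted_storage_cost(provisioned_gb: int, used_gb: int, months: int) -> float:
    """Cost of storage you provisioned but no longer need.

    RDS lets you grow allocated storage but never shrink it, so the
    (provisioned - used) gap keeps billing until you migrate.
    """
    return (provisioned_gb - used_gb) * ASSUMED_PRICE_PER_GB_MONTH * months

# 1 TB provisioned for a migration, 200 GB actually used, over a year:
print(f"${wasted_storage_cost(1000, 200, 12):,.2f}")  # → $1,104.00
```

Roughly a thousand dollars a year for disk space holding nothing, which is why "can't shrink storage" keeps showing up as a reason teams start pricing alternatives.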

If the cost side of these limits sounds familiar, we covered it in detail in Why AWS RDS Is Expensive Once Your Product Starts Growing.

For a direct cost comparison with numbers, see AWS RDS vs Self-Hosted PostgreSQL: Complete Cost Comparison.

How does cost scale in managed vs self-hosted databases?

Most teams don’t feel the cost difference at the beginning. It shows up gradually and then all at once.

Early stage (0–1K users)

  • Managed: $50–200/month
  • Self-hosted: Similar or slightly cheaper

At this stage, the difference doesn’t matter. You’re paying for convenience and it’s worth it.

Growth stage (1K–50K users)

  • Managed: $200 → $2,000/month
  • Self-hosted: ~30–50% cheaper on raw compute

This is where costs start diverging.

You scale vertically, add replicas, increase storage — and each step comes with vendor markup.

Most teams notice the bill growing, but still tolerate it because:

  • It’s predictable
  • It requires zero operational effort

If you’re starting to question these rising costs, we broke this down in detail in Why AWS RDS Is Expensive Once Your Product Starts Growing.

Scale stage (50K+ users)

  • Managed: $2,000 → $10,000+/month
  • Self-hosted: Significantly lower infra cost but higher engineering cost

Now the difference becomes impossible to ignore.

Two things start happening at the same time:

  1. You’re paying for over-provisioning
    Larger instances, unused storage, idle replicas
  2. You’re hitting scaling limits
    Replica caps, single-writer bottlenecks

So you’re paying more, while getting less flexibility.

If you’re comparing what that looks like in actual numbers, the breakdown in AWS RDS vs Self-Hosted PostgreSQL: Complete Cost Comparison makes the difference very clear.

The hidden cost most teams miss

Self-hosted isn’t “free.”

If your team spends:

  • 15–20 hours/month on database operations
  • Debugging replication, backups, failover

That engineering time can cancel out the savings.
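You can make that trade-off explicit with a small break-even calculation. The rates below are placeholders (plug in your own bill, discount, hours, and loaded hourly rate); the discount range mirrors the 30–50% figure above:

```python
def self_hosting_net_savings(managed_bill: float,
                             infra_discount: float,
                             ops_hours_per_month: float,
                             engineer_hourly_rate: float) -> float:
    """Monthly savings of self-hosting AFTER counting engineering time.

    infra_discount: fraction saved on raw compute vs managed pricing.
    A negative result means managed is still the cheaper option.
    """
    infra_savings = managed_bill * infra_discount
    labor_cost = ops_hours_per_month * engineer_hourly_rate
    return infra_savings - labor_cost

# $2,000/month managed bill, 40% cheaper infra, 18 hrs/mo ops at $75/hr:
print(self_hosting_net_savings(2000, 0.40, 18, 75))  # → -550.0 (managed wins)
# The same team at a $6,000/month bill:
print(self_hosting_net_savings(6000, 0.40, 18, 75))  # → 1050.0 (self-hosted wins)
```

The crossover point is what matters: labor cost is roughly fixed, while infrastructure savings scale with the bill. That is why the same trade-off flips as you grow.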

This is exactly the gap newer models like BYOC try to solve, which we explain in What Is BYOC? A Smarter Alternative to Expensive Managed Databases.

What this means in practice

  • Managed is cheaper when your time matters more than your infrastructure cost
  • Self-hosted is cheaper when your infrastructure cost becomes a real budget line item

This is why most teams don’t switch early. And why, once they switch late, they wish they had done it sooner.

Quick comparison: cost vs scale

  • Managed: Lower operational effort, but costs increase rapidly with scale
  • Self-hosted: Lower infrastructure cost, but higher engineering overhead

Most teams don’t switch because managed is expensive. They switch when the cost becomes predictable and unjustifiable.

What breaks first in self-hosted database scaling?

Self-hosted databases don’t have vendor-imposed limits; PostgreSQL itself can scale to billions of rows. What breaks isn’t the database. It’s your team.

The most common failures are operational complexity overwhelming your engineers, failover systems failing during actual incidents, and schema changes slowing down so much that shipping new features becomes painful.

[Figure: what breaks first when scaling (managed limits vs self-hosted operational complexity)]

Self-Hosted Database Scaling: What Breaks First (Real-World Failure Points)

Here’s what actually breaks first when you scale a self-hosted database, not in theory, but in production:

| What Breaks | When It Breaks | What Your Team Actually Goes Through |
|---|---|---|
| Operational complexity | When you go from 1 database to 3+ | Someone gets paged at 3 AM. Manual steps in the runbook get skipped under pressure. Human error during scaling causes data inconsistency. |
| Failover | When your primary server actually crashes | The high-availability setup you built works in testing but fails during the real incident. Extended downtime while the team debugs under pressure. |
| Backups | When you actually need to restore | You set up automated backups 6 months ago. Nobody tested a restore since then. Recovery fails because archiving was silently broken for weeks. |
| Schema changes | When tables exceed 50–100M rows | A simple table change takes 2 hours and locks the entire table. You can’t ship a product update without scheduling a maintenance window. Feature velocity drops. |
| Expertise gaps | When the one person who knows Postgres leaves | Every scaling decision becomes a research project. The team searches “how to set up replication” at 2 AM during an outage. |
| Hidden costs | When you add up engineering time | 20 hours/month managing databases, multiplied by your engineering rate, sometimes costs MORE than the managed markup you were trying to avoid. |


For a broader look at the full trade-off between managed and self-hosted, not just the scaling side, see our managed vs self-hosted database guide.

The counter-intuitive truth most scaling guides miss

There’s a popular discussion on Reddit’s r/Database with 40+ comments titled “we need to stop worrying about INFINITE SCALE.” The core argument: most teams over-engineer for a scale they will never reach, while under-investing in the basics (backups, monitoring, failover) that actually fail in production.

The most common self-hosted database failure isn’t “we couldn’t scale to 10 million users.” It’s “we forgot to test our backup restore process and lost a week of data.”

Before optimizing for theoretical future scale, make sure the fundamentals work under pressure.
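“Test your restores” is easy to say and easy to forget. Even a trivial freshness check, run in CI or cron, catches the silent-failure mode described above. The threshold and dates here are placeholders:

```python
from datetime import datetime, timedelta

def restore_drill_overdue(last_verified_restore: datetime,
                          max_age_days: int = 30,
                          now: datetime = None) -> bool:
    """True if too long has passed since a backup was actually restored
    and verified. The failure mode isn't missing backups -- it's stale,
    untested ones."""
    now = now or datetime.utcnow()
    return now - last_verified_restore > timedelta(days=max_age_days)

# Backups configured months ago, never re-verified since:
last_drill = datetime(2026, 1, 1)
print(restore_drill_overdue(last_drill, now=datetime(2026, 4, 22)))  # → True
```

A check like this does nothing to make restores work; it only guarantees someone finds out they are broken on a schedule, instead of during an outage.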

When should you switch from managed to self-hosted database scaling?

Switch from managed to self-hosted when your monthly database bill exceeds the cost of a part-time DevOps resource (roughly $3–5K/month), you need PostgreSQL extensions or configurations your managed provider restricts, or you have compliance requirements that demand full infrastructure ownership.

If none of these apply, stay managed; switching too early creates more problems than it solves.

Managed vs Self-Hosted Database Scaling: When to Switch (Stage-Based Framework)

Here’s how to decide between managed, self-hosted, and BYOC based on your stage, team, and database spend:

| Your Stage | Monthly DB Spend | Team | What to Do |
|---|---|---|---|
| MVP (< 1K users) | Under $200/mo | No DevOps | Stay managed. Focus on building your product, not infrastructure. |
| Growth (1K–50K users) | $200–2,000/mo | Small team, maybe 1 infra person | Start tracking database costs monthly. Evaluate when the bill crosses $1,000/mo consistently. |
| Scale (50K+ users) | $2,000–10,000/mo | DevOps team exists | Seriously evaluate self-hosted or BYOC. The managed markup is now a real budget line item. |
| Mature (100K+ users) | $10,000+/mo | Dedicated DBA or infra team | Managed is almost certainly overpaying. Self-hosted or BYOC should be your default. |

The 5 signals it’s time to move

  1. Your database bill is growing faster than your revenue. You’re scaling costs but not scaling customers proportionally. We covered this pattern in detail in Why AWS RDS is expensive as you scale.
  2. You’ve hit the 5 read replica limit and your application needs more read throughput.
  3. You need PostGIS, TimescaleDB, or other extensions that RDS doesn’t support or restricts.
  4. Your compliance team is asking where exactly the data lives and who has access to the underlying infrastructure.
  5. You’re paying for storage you can’t shrink and it’s adding up month over month.

For a side-by-side cost comparison with real numbers, see AWS RDS vs Self-Hosted PostgreSQL: Complete Cost Comparison.

If you’re specifically comparing managed providers, our comparison pages cover SelfHost vs Neon, SelfHost vs Supabase, and SelfHost vs DigitalOcean.

What scaling actually looks like as your product grows (real example)

Let’s make this concrete.

Imagine a SaaS product growing from early traction to scale.

[Figure: growth stages from early traction to the scaling wall]

Stage 1: Early traction (0–5K users)

  • Single RDS instance
  • Minimal load
  • Everything feels fast

No scaling decisions needed. Managed works perfectly.

Stage 2: Growth (5K–25K users)

  • Add 1–2 read replicas
  • Increase instance size
  • Connection pooling becomes important

Things still work but you start noticing:

  • Occasional slow queries
  • Higher monthly bills

Scaling is still easy. You just click “upgrade.”

Stage 3: Pressure starts building (25K–75K users)

  • All 5 read replicas in use
  • Analytics queries competing with product traffic
  • Write latency starts creeping up

Now the cracks appear:

  • You can’t add more replicas
  • One heavy query slows everything down
  • Writes feel slower even though reads are fine

This is where most teams realize:
the architecture itself is becoming the bottleneck.

If you’re seeing this pattern, it’s the same stage where teams start comparing alternatives like SelfHost vs AWS RDS or evaluating whether to move off managed entirely.

Stage 4: Scaling wall (75K+ users)

  • Write throughput becomes the limiting factor
  • Increasing instance size gives diminishing returns
  • Costs spike aggressively

At this point, your options are no longer simple:

  • Move to Aurora (higher cost)
  • Redesign your system for sharding
  • Migrate off managed entirely

None of these are quick fixes.

What happens if you were self-hosted?

You would hit different problems:

  • Failover issues under load
  • Replication lag between nodes
  • Operational overhead scaling with complexity

But you wouldn’t be blocked by hard limits.

If you’re exploring ways to reduce that operational burden without giving up control, this is exactly where the PostgreSQL MCP Server guide becomes relevant.

The key takeaway

Managed databases fail at the system level: hard limits you can’t bypass.
Self-hosted databases fail at the team level: complexity you have to handle.

Scaling isn’t just about how much load your database can handle.
It’s about what breaks first: your infrastructure, or your team.

When NOT to switch

If you’re spending under $500/month on your database and your team has no infrastructure experience, do not self-host for ideological reasons. The operational cost will exceed the savings within the first month. Build your product first. Optimize infrastructure later.

The third option for database scaling managed vs self-hosted teams miss

BYOC (Bring Your Own Cloud) is a model where your database runs in your own AWS account, on your EC2 instances, in your VPC, with your encryption keys but a vendor handles all the operational work: backups, monitoring, failover, patching, scaling. You keep the cost control of self-hosting. You skip the operational burden.

Managed vs Self-Hosted vs BYOC: Database Scaling Comparison

Here’s how managed, self-hosted, and BYOC compare across cost, control, and scaling limits:

There are really only three ways to run databases today.

| The Problem | Managed | Self-Hosted | BYOC |
|---|---|---|---|
| Cost at scale | High (vendor markup on every resource) | Low (raw compute pricing) | Low (raw compute + flat management fee) |
| Operational burden | Zero (vendor handles everything) | All on your team, 24/7 | Handled by vendor |
| Infrastructure ownership | Vendor owns it | You own it | You own it |
| Scaling limits | Vendor-imposed (5 replicas, single-writer) | Your cloud limits only | Your cloud limits only |
| Vendor lock-in | High (migration is complex) | None | Low (database stays in your account if you leave) |


In simple terms: BYOC gives you the pricing of self-hosting with the convenience of managed. You own the infrastructure, someone else keeps it running.

This is the model SelfHost is built around. Your database runs on your EC2 instances, in your VPC, with your encryption keys. SelfHost handles the operations layer, so your team doesn’t have to.

This isn’t the right model for everyone. If you’re at the MVP stage spending $50/month on a hobby database, BYOC is overkill. But for teams in the $2,000–10,000/month range, where managed is getting expensive but self-hosting is too complex, BYOC removes the deadlock.

To understand how BYOC works in detail, see What Is BYOC? A Smarter Alternative to Expensive Managed Databases. If you’re weighing BYOC against staying on RDS, see how SelfHost compares to AWS RDS and SelfHost vs Railway.

Should you self-host your database? A quick decision checklist

Self-host if you have a dedicated infrastructure person, your database bill exceeds $3K/month, and you need configuration access your managed provider restricts. If you don’t have all three, consider BYOC or stay managed. Moving too early costs more than staying too long.

Self-host if ALL of these are true:

  • You have at least 1 DevOps or infrastructure engineer (or equivalent)
  • Your monthly database spend exceeds $3,000
  • You need custom extensions or deep PostgreSQL configuration access
  • You’re comfortable being on-call for database incidents

Stay managed if ANY of these are true:

  • Your team has no infrastructure expertise
  • Your database spend is under $500/month
  • You’re still finding product-market fit
  • You’d rather ship features than manage database infrastructure

Consider BYOC if:

  • You want self-hosted pricing with managed convenience
  • You’re spending $2K–10K/month on databases
  • Compliance requires you to own the infrastructure
  • You don’t want to hire a full-time DBA
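The three checklists above reduce to a small decision function. This is a sketch: the thresholds mirror the article’s numbers, but treat them as starting points for your own spreadsheet, not rules:

```python
def choose_model(monthly_db_spend: float,
                 has_infra_engineer: bool,
                 needs_custom_config: bool,
                 must_own_infra: bool) -> str:
    """Map the checklist above onto managed / self-hosted / BYOC."""
    # Stay managed: low spend or no expertise (unless compliance forces
    # infrastructure ownership, which points at BYOC instead).
    if monthly_db_spend < 500 or not has_infra_engineer:
        return "byoc" if must_own_infra else "managed"
    # Self-host only when ALL of the self-hosting conditions hold.
    if monthly_db_spend >= 3000 and needs_custom_config:
        return "self-hosted"
    # The middle ground: spend is real but the team is thin.
    if monthly_db_spend >= 2000 or must_own_infra:
        return "byoc"
    return "managed"

print(choose_model(150, False, False, False))   # → managed
print(choose_model(4000, True, True, False))    # → self-hosted
print(choose_model(5000, True, False, True))    # → byoc
```

The point of writing it down is the same as the checklist’s: the decision is driven by spend and team shape, not by which model is “better” in the abstract.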

Not sure which model fits your setup? Our PostgreSQL MCP Server guide covers how AI-native tools are changing the way teams manage databases, reducing the operational burden that makes self-hosting hard in the first place.

Which database is best for scalability?

PostgreSQL handles both vertical and horizontal scaling well. For managed setups, Aurora offers the highest read replica count (15 vs standard RDS PostgreSQL’s 5). For self-hosted, PostgreSQL combined with tools like Patroni for high availability and Citus for horizontal scaling can handle very large workloads. The best choice depends on your architecture and team, not the engine alone.

What are the three types of database scaling?

Vertical: making your existing server bigger (more CPU, RAM, storage).
Horizontal: adding more servers that share the workload (read replicas, sharding).
Functional: separating different types of work to different databases (for example, analytics queries on one database, transactions on another).
Most teams start vertical, move to horizontal when vertical hits its ceiling, and eventually need functional separation.

Can managed databases scale indefinitely?

No. Every managed platform has hard limits. RDS PostgreSQL caps at 64 TiB storage, 5 read replicas, and single-writer architecture. Aurora extends some of these but at higher cost. “Auto-scaling” sounds unlimited but operates within fixed, documented boundaries.
We broke these limits down with real numbers earlier in this guide, and in more detail in AWS RDS vs Self-Hosted PostgreSQL: Complete Cost Comparison.

Is self-hosting cheaper than managed at scale?

Usually, raw EC2 compute is roughly 50% cheaper than equivalent RDS instances. But factor in engineering time: if your team spends 20+ hours/month managing the database, that labor cost can exceed the managed markup you were trying to avoid. For a detailed breakdown with real numbers, see AWS RDS vs Self-Hosted PostgreSQL: Complete Cost Comparison.

What’s more reliable, self-hosted databases or managed services?

It depends on your team. Managed services offer built-in redundancy, automated failover, and guaranteed uptime SLAs but you’re limited to the provider’s architecture. Self-hosted gives you full control over reliability design, but only works if your team has the expertise to build and maintain high availability. Most outages in self-hosted setups come from human error, not hardware failure.

What are the cons of self-hosting a database?

The main cons are operational complexity (backups, failover, patching, monitoring become your responsibility), the need for dedicated infrastructure expertise on your team, hidden labor costs that can exceed managed pricing, and the risk of data loss if backup and recovery processes aren’t properly tested and maintained.

What is BYOC for databases?

BYOC (Bring Your Own Cloud) means your database runs in your own cloud account, your VPC, your encryption keys, your bill, but a vendor handles backups, monitoring, failover, and maintenance. You keep cost control and infrastructure ownership without the operational burden. It sits between fully managed and fully self-hosted. Learn more in our complete BYOC guide.

What should you actually do?

If you’re still deciding, here’s the simplest way to think about it:

  • If you’re early-stage and want to move fast → stay managed
  • If you’re scaling and your database is becoming a real cost center → evaluate self-hosted or BYOC
  • If you want control without operational overhead → consider BYOC

Most teams don’t get this decision wrong because they chose the wrong model. They get it wrong because they switched at the wrong time.

The database scaling managed vs self-hosted debate isn’t about which model is “better.” It’s about what breaks first for your specific team, at your specific scale, with your specific resources.

If you’re on managed and hitting limits – now you know exactly what those limits are and what they cost.

If you’re considering self-hosting – now you know the operational reality, not just the pricing spreadsheet.

If both feel like a compromise – BYOC exists for exactly this reason.

The worst decision isn’t choosing managed or self-hosted. It’s switching at the wrong time, either too early (before you have the team to support it) or too late (after you’ve already paid the markup for years).

Explore SelfHost: managed PostgreSQL in your own cloud. Production-ready in under 2 minutes.