
PostgreSQL Is (Almost) the Only Thing You Need

PostgreSQL can cover far more than “just SQL”: search, geo, documents, and even some caching, if you use the right extensions and patterns.

Jordan Marcelino, Software Engineer

You know that moment when a product request lands and the default reflex is: “We need a new system for that”?

Search? “Elasticsearch.”
Geo? “A separate geospatial store.”
Documents? “MongoDB.”
Caching? “Redis, obviously.”

And then you spend the next three months paying the “integration tax”: new infra, new monitoring, new failure modes, new data consistency headaches, and a new set of people who have to be on-call for it.

If you’ve built enough systems, you eventually notice a pattern: you don’t have a data problem. You have a complexity problem. And more often than not, PostgreSQL can solve the original requirement well enough while keeping your architecture simpler, cheaper, and more reliable.

In this post, you’ll learn where PostgreSQL genuinely shines beyond traditional relational data, how to use it for search/geo/docs/caching-like workloads, and where the “Postgres is all you need” claim breaks down.

Background

PostgreSQL is widely known as a relational database, but modern Postgres is better described as a data platform with a strong relational core. The magic isn’t that it does everything perfectly. It’s that it can do many things well enough with:

  • Mature ACID transactions and constraints
  • A battle-tested query planner
  • Extensions (e.g., PostGIS, pg_trgm)
  • Rich indexing (B-tree, GIN, GiST, BRIN)
  • JSONB support (documents inside relational rows)
  • Practical knobs for durability/performance tradeoffs

A useful framing:

  • Relational tables: truth, integrity, and joins
  • JSONB: flexible “document-ish” substructures when schemas evolve
  • pg_trgm + full-text search: “good search” for many product use cases
  • PostGIS: serious geospatial queries without a separate system
  • Unlogged tables: fast ephemeral storage when you can accept data loss on crash (sometimes “cache-ish”)
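
Most of these capabilities are one CREATE EXTENSION away. As a quick check of what your server can install (the exact list depends on which packages were installed alongside Postgres), something like this works:

-- What can this server install, and what is already installed?
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name IN ('pg_trgm', 'postgis');

-- Enabling an extension is a single statement per database
CREATE EXTENSION IF NOT EXISTS pg_trgm;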

The Problem

Teams often adopt specialized systems too early and for the wrong reasons. The result: a toolchain that’s impressive on a diagram and painful in production.

Common failure symptoms:

  • You ship slower because every feature needs cross-system plumbing (ETL, CDC, dual writes, backfills).
  • Your “source of truth” gets fuzzy (which system is correct when they disagree?).
  • On-call gets worse (two systems fail differently, and you debug the boundary).
  • You optimize prematurely (you bought a Ferrari because you once needed to go 120 km/h on a road with speed bumps).

Why this matters:

  • Complexity isn’t neutral; it compounds. If you add systems without strong justification, you are effectively borrowing against future engineering capacity.

So the real question isn’t “Can Postgres do X?”
It’s: “Can Postgres do X well enough at our current scale and constraints, and does it reduce total system cost?”

The Approach

Treat PostgreSQL as your default multi-capability datastore, then add specialized systems only when you have evidence that Postgres is the bottleneck (latency SLO misses, cost blowups, query plans that can’t be fixed, operational constraints).

Below is a practical map of these capabilities, with patterns you can apply immediately.

1) Full-text-ish search with pg_trgm (and friends)

If your search needs are “users type messy strings and we want decent matching,” trigram similarity is often enough, especially for:

  • name search
  • product titles
  • partial matches
  • typo tolerance

Example: basic trigram setup

-- Enable extension
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Add a GIN index for fast similarity searches
CREATE INDEX CONCURRENTLY IF NOT EXISTS products_title_trgm_idx
ON products USING gin (title gin_trgm_ops);

-- Query: fuzzy match
SELECT id, title
FROM products
WHERE title % 'airfryer philip'
ORDER BY similarity(title, 'airfryer philip') DESC
LIMIT 20;

Practical notes:

  • % uses a similarity threshold (tunable via pg_trgm.similarity_threshold).
  • Use EXPLAIN (ANALYZE, BUFFERS) to confirm index usage.
  • This won’t replace a full search engine for complex ranking, multi-field boosting, heavy aggregations, or “Google-like” relevance. But it handles a surprising number of “we need Elasticsearch” requests.
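
For the first two notes, a quick session-level sanity check might look like this:

-- Tighten or loosen what % considers a match (the default threshold is 0.3)
SET pg_trgm.similarity_threshold = 0.45;

-- Confirm the GIN index is actually being used
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, title
FROM products
WHERE title % 'airfryer philip';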

Pitfalls to avoid

  • Forgetting indexes and then blaming Postgres for slow search.
  • Trying to build “search analytics” workloads inside OLTP without planning.

2) Geospatial with PostGIS (real geo, not toy geo)

If you’re dealing with “nearest X,” “within radius,” polygons, or route-ish logic, PostGIS is a first-class solution.

Example: nearest stores within 5km

CREATE EXTENSION IF NOT EXISTS postgis;

-- Store coordinates as geography for distance-in-meters convenience
ALTER TABLE stores
ADD COLUMN IF NOT EXISTS location geography(Point, 4326);

-- Index for fast proximity queries
CREATE INDEX CONCURRENTLY IF NOT EXISTS stores_location_gix
ON stores USING gist (location);

-- Query: within 5km of a point, sorted by distance
SELECT id, name,
       ST_Distance(location, ST_MakePoint(:lng, :lat)::geography) AS meters
FROM stores
WHERE ST_DWithin(location, ST_MakePoint(:lng, :lat)::geography, 5000)
ORDER BY meters
LIMIT 50;

Practical notes:

  • geography is convenient for meters; geometry can be faster for some cases if you manage projections properly.
  • GiST indexes matter. Without them, you’re doing expensive scans.
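
One loading detail that bites people: ST_MakePoint takes longitude first, then latitude. A backfill sketch, assuming the stores table already has plain numeric lat and lng columns (hypothetical names):

-- Backfill the geography column from existing coordinate columns
UPDATE stores
SET location = ST_SetSRID(ST_MakePoint(lng, lat), 4326)::geography
WHERE location IS NULL;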

Pitfalls to avoid

  • Using naive latitude/longitude math in app code.
  • Ignoring index selectivity (e.g., huge radius queries become expensive).

3) Document-ish data with JSONB (without abandoning relational integrity)

JSONB shines when:

  • you have semi-structured attributes that vary by category,
  • your schema evolves frequently,
  • you still want transactions, joins, and constraints around the core entity.

Example: product attributes

ALTER TABLE products
ADD COLUMN IF NOT EXISTS attributes jsonb NOT NULL DEFAULT '{}'::jsonb;

-- Index common access patterns
CREATE INDEX CONCURRENTLY IF NOT EXISTS products_attributes_gin_idx
ON products USING gin (attributes);

-- Query: filter on JSONB fields
SELECT id, title
FROM products
WHERE attributes @> '{"color":"black","wireless":true}';

Use JSONB responsibly:

  • Keep core fields relational (price, status, owner_id, created_at).
  • Put optional/variable fields into JSONB.
  • Avoid turning the entire row into a JSON blob “because Mongo.” That’s not flexibility. That’s giving up constraints and making queries harder.
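
You can also keep some discipline inside the JSONB column itself with constraints and expression indexes. A minimal sketch (the brand key is just an illustration):

-- Guard the overall shape of the column
ALTER TABLE products
ADD CONSTRAINT products_attributes_is_object
CHECK (jsonb_typeof(attributes) = 'object');

-- Index one hot key with an expression index instead of the whole document
CREATE INDEX CONCURRENTLY IF NOT EXISTS products_attr_brand_idx
ON products ((attributes ->> 'brand'));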

Pitfalls to avoid

  • Storing unbounded, deeply nested JSON that becomes unindexable and slow.
  • Building everything as JSONB and then complaining about reporting queries.

4) “Caching” with unlogged tables (fast, but not a free lunch)

Unlogged tables can behave like a database-side cache because:

  • they avoid WAL logging (faster writes),
  • but their contents are not crash-safe (the table is truncated after a crash or unclean shutdown).

This can be useful for:

  • ephemeral computed results,
  • session-like data (depending on tolerance),
  • expensive aggregation outputs you can recompute.

Example: ephemeral results

CREATE UNLOGGED TABLE IF NOT EXISTS product_search_cache (
  cache_key text PRIMARY KEY,
  payload jsonb NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

-- Simple TTL cleanup
DELETE FROM product_search_cache
WHERE created_at < now() - interval '10 minutes';

Let’s be blunt: this is not Redis.

  • Redis excels at ultra-low-latency access patterns and huge QPS with low overhead.
  • Unlogged tables still go through SQL execution, locks, and query planning.
  • But for many “we need caching” cases, especially inside a single system boundary, this can be enough and dramatically simpler. A minimal read/write pattern is sketched below.
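
A minimal read/write pattern against the table above might look like this (the cache key shown is just an illustration):

-- Cache write: upsert so a recomputation simply refreshes the entry
INSERT INTO product_search_cache (cache_key, payload)
VALUES ('search:airfryer:page1', '{"ids": [1, 2, 3]}'::jsonb)
ON CONFLICT (cache_key)
DO UPDATE SET payload = EXCLUDED.payload, created_at = now();

-- Cache read: treat anything older than the TTL as a miss
SELECT payload
FROM product_search_cache
WHERE cache_key = 'search:airfryer:page1'
  AND created_at > now() - interval '10 minutes';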

If you only do one thing: Before adding a new datastore, run a one-week measurement sprint: query plans, indexes, p95 latency, and connection pool tuning. Most “Postgres can’t do it” claims collapse under basic profiling.
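
A reasonable first stop for that sprint is pg_stat_statements (it has to be listed in shared_preload_libraries; the column names below assume PostgreSQL 13 or newer):

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- The queries consuming the most total time are usually worth profiling first
SELECT calls,
       round(mean_exec_time::numeric, 2) AS mean_ms,
       round(total_exec_time::numeric, 2) AS total_ms,
       left(query, 80) AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;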

5) The decision rule: default Postgres, escalate with evidence

Use this simple escalation ladder:

  1. Fix queries and indexes (EXPLAIN, GIN/GiST, partial indexes, covering indexes).
  2. Use extensions (pg_trgm, PostGIS).
  3. Add sane caching (app cache, materialized views, or limited unlogged patterns); a sketch follows this list.
  4. Scale Postgres (read replicas, partitioning, connection pooling, possibly sharding).
  5. Only then introduce a specialized system and document why.
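
Steps 1 and 3 often get you surprisingly far on their own. A minimal sketch, assuming a hypothetical orders table with status, created_at, and total_amount columns:

-- Step 1: a partial index only covers the rows a hot query actually touches
CREATE INDEX CONCURRENTLY IF NOT EXISTS orders_pending_idx
ON orders (created_at)
WHERE status = 'pending';

-- Step 3: a materialized view caches an expensive aggregation inside Postgres
CREATE MATERIALIZED VIEW IF NOT EXISTS daily_revenue AS
SELECT date_trunc('day', created_at) AS day, sum(total_amount) AS revenue
FROM orders
GROUP BY 1;

-- Refresh on your own schedule (CONCURRENTLY requires a unique index on the view)
CREATE UNIQUE INDEX IF NOT EXISTS daily_revenue_day_idx ON daily_revenue (day);
REFRESH MATERIALIZED VIEW CONCURRENTLY daily_revenue;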

Practical Walkthrough

Here’s a pragmatic, repeatable plan you can apply for most “we need X system” requests.

  1. Write the requirement as a measurable contract

    • Example: “Search must return results in p95 < 200ms for 1M products; supports typo tolerance; filters: price range + category.”
  2. Prototype in Postgres first

    • Use pg_trgm for fuzzy matching, or built-in full-text search (tsvector) for token-based relevance; a tsvector sketch follows this walkthrough.
    • Use PostGIS for geo queries.
    • Use JSONB for flexible attributes while keeping relational constraints.
  3. Add the right index

    • Search: GIN trigram index on text fields, or GIN on tsvector.
    • Geo: GiST on geography/geometry.
    • JSONB: GIN on JSONB, but only if you have query patterns that benefit.
  4. Benchmark with real-ish data

    • Generate data sizes that match expected growth, not toy datasets.
    • Measure p50/p95 and CPU/IO behavior.
  5. Operationalize

    • Add query observability (slow query logs, pg_stat_statements).
    • Set SLOs and alerts.
  6. Decide

    • If Postgres meets the contract, ship it.
    • If not, document exactly what failed and why a specialized system is justified.
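
For step 2, the built-in full-text stack (tsvector/tsquery) is worth prototyping alongside pg_trgm whenever you need token-based relevance. A minimal sketch, assuming PostgreSQL 12+ for generated columns (the search_vector column name is just an illustration):

-- Keep a tsvector in sync automatically via a generated column
ALTER TABLE products
ADD COLUMN IF NOT EXISTS search_vector tsvector
GENERATED ALWAYS AS (to_tsvector('english', coalesce(title, ''))) STORED;

CREATE INDEX CONCURRENTLY IF NOT EXISTS products_search_vector_idx
ON products USING gin (search_vector);

-- Token-based matching with a simple relevance ranking
SELECT id, title, ts_rank(search_vector, query) AS rank
FROM products, websearch_to_tsquery('english', 'air fryer') AS query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 20;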

A simple checklist you can paste into your PRD:

  • [ ] Requirement has p95 latency target and scale assumptions.
  • [ ] Index plan exists and is validated with EXPLAIN ANALYZE.
  • [ ] Observability is in place (pg_stat_statements, slow queries).
  • [ ] Failure modes are defined (timeouts, fallbacks).
  • [ ] Decision record written if adding a new datastore.

Conclusion

PostgreSQL can cover far more than “tables and joins.” Used well, it can handle a wide slice of what teams reach for separate systems to solve: fuzzy search, geospatial queries, flexible document attributes, and some cache-like patterns.

Core takeaways:

  • Defaulting to Postgres is an anti-complexity strategy, not a religion.
  • Extensions + indexes are the difference between “Postgres is slow” and “Postgres is fast enough.”
  • JSONB is a tool, not an excuse to abandon schema discipline.
  • Unlogged tables can be useful, but they’re not a Redis replacement.
  • Add new systems only when you have evidence that Postgres cannot meet your measurable requirements.

Your next step: Pick one feature in your current roadmap that someone would normally solve with a new datastore. Prototype it in Postgres with the right extension/index, benchmark it, and write a one-page decision record. If Postgres passes, you just saved months of complexity. If it fails, you now have the data to justify specialization without guessing.
