
SD - Cache


Types of Caches

1. Cache-Aside (Lazy Loading)

Read: (Update cache on miss)
User -> Read -> Cache
-> (Cache Miss) -> Database
-> Store in Cache -> Return Data

Write:
User -> Write to Database
-> (Optional) Invalidate Cache

At any point:
Database = 100 records
Cache = Only hot/active 10 records
  • Cache is populated lazily on read misses
  • Cache holds only frequently accessed items

Pro: Saves memory; ideal for read-heavy systems
Con: Cache can become stale if not properly invalidated after database updates.

Use Case: Product catalog in e-commerce. Read-heavy; not all products need to be cached.
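
A minimal cache-aside sketch in Python, assuming a plain dict as the cache and hypothetical db_get/db_put placeholders standing in for real database calls:

```python
cache = {}

def db_get(key):
    # placeholder for a real database read
    return f"value-for-{key}"

def db_put(key, value):
    # placeholder for a real database write
    pass

def read(key):
    if key in cache:          # cache hit
        return cache[key]
    value = db_get(key)       # cache miss -> load from the database
    cache[key] = value        # populate the cache lazily
    return value

def write(key, value):
    db_put(key, value)        # write goes to the database
    cache.pop(key, None)      # optionally invalidate the stale cached copy

print(read("product:42"))    # miss -> loads from DB and caches
print(read("product:42"))    # hit  -> served from cache
```

Only keys that are actually read end up in the cache, which is why it stays small relative to the database.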

tip

Cache-aside
The app checks the fridge itself. If it's empty, the app goes to get the groceries and fills it.

Read-through
The app asks the cache. If the cache is empty, the cache itself fetches the groceries and fills up.

2. Write-Through Cache

Read:
User -> Read -> Cache
-> Return Data

Write: (Both writes are synchronous)
User -> Write -> Cache
-> Write to Database

At any point:
Database = 100 records
Cache = Full mirror of 100 records
  • Cache and DB always in sync. Cache is updated on every write.

Pro: Guarantees data consistency between cache and database.
Con: Higher write latency due to the dual write (cache + DB).

Use Case: Banking systems. Must reflect latest balances; consistency > latency.
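
A minimal write-through sketch, assuming dicts standing in for the cache and the database (db_put is a hypothetical placeholder for a synchronous DB write):

```python
cache = {}
database = {}

def db_put(key, value):
    database[key] = value     # stand-in for a synchronous database write

def write(key, value):
    cache[key] = value        # 1. update the cache
    db_put(key, value)        # 2. update the DB in the same request

def read(key):
    return cache[key]         # cache mirrors the DB, so a hit is expected

write("account:1", {"balance": 500})
print(read("account:1"))      # served from cache, consistent with the DB
```

The extra latency comes from step 2 happening before the write is acknowledged.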

3. Write-Back Cache

Read:
User -> Read -> Cache
-> Return Data

Write: (Async DB update)
User -> Write -> Cache
-> (Async) Write to Database

At any point:
Database = ~90 records (lagging behind due to async writes)
Cache = 100 records (source of truth)
  • Cache is updated immediately
  • Database is updated asynchronously (batched or delayed)
  • Risk of data loss if the cache crashes before the DB update

Pro: Ideal for write-heavy scenarios; optimizes writes by batching database updates.
Con: Risk of data loss if the cache fails before pending writes reach the database.
Use Case: Real-time analytics dashboard. Temporary data loss is acceptable; throughput > durability.
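
A minimal write-back sketch, assuming dict stand-ins and an explicit flush() call in place of a real timer- or size-based batch job:

```python
cache = {}
database = {}
dirty = set()                 # keys written to cache but not yet to the DB

def write(key, value):
    cache[key] = value        # fast path: cache only
    dirty.add(key)            # remember it still needs a DB write

def flush():
    for key in list(dirty):
        database[key] = cache[key]   # batched / asynchronous DB update
        dirty.discard(key)

write("metric:cpu", 87)
write("metric:mem", 63)
print(len(dirty), "pending DB writes")  # data-loss window if the cache dies now
flush()                                  # batched persistence to the database
```

The gap between write() and flush() is exactly the "~90 vs 100 records" lag described above.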

4. Redis Pub/Sub Caching (complex)

Read: (Real-time listening)
User -> Subscribes to a Channel
-> Receives Updates Instantly

Write: (Broadcast updates)
Producer -> Publish to Channel
-> All Subscribers Receive Instantly

At any point:
Database = Optional or external for persistence
Cache = 0 records (Holds no state; transient message bus)
  • Messages are not stored; they exist only in flight during delivery
  • If a user isn't listening when the message is published, they miss it

Pro: Ultra-low-latency updates to many clients simultaneously.
Con: No message persistence - missed updates are permanently lost.

Use Case: Live sports scoring system; only the latest score matters.
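
A minimal Pub/Sub sketch using redis-py, assuming a Redis server on localhost:6379 and `pip install redis`; the channel name and score payload are made up:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Subscriber side: must already be listening, or the message is lost.
pubsub = r.pubsub()
pubsub.subscribe("scores:match-123")

# Publisher side: broadcast the latest score to all current subscribers.
r.publish("scores:match-123", "HOME 2 - 1 AWAY")

# Drain whatever is currently available (subscribe confirmation + message).
while True:
    msg = pubsub.get_message(timeout=1.0)
    if msg is None:
        break
    if msg["type"] == "message":
        print("update:", msg["data"].decode())
```

Subscribing after the publish would print nothing, which is the "missed updates are lost" behaviour above.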

tip

Redis Streams is a durable alternative to Pub/Sub

5. Distributed Caching 🚀

Read: (Multi-node lookup)
User -> Read -> Cache Node 1
-> (Miss) -> Cache Node 2
-> (Miss) -> Database
-> Store in Appropriate Cache Node

Write: (Node synchronization)
User -> Write -> Primary Cache Node
-> Sync to Secondary Nodes
-> Write to Database

At any point:
Database = 100 records
Cache = 100 records distributed across nodes:
- Shards: node A = 30, node B = 70
- Replicas: all nodes hold full copies (100 records each) and serve read traffic

Pro: Horizontally scalable; supports millions of concurrent users with high availability, low latency, and geographic scale.
Con: Complex consistency and synchronization between cache nodes.
Use Case: Global session management system
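
A minimal sharding sketch showing how keys can be routed to cache nodes by hash; the node dicts and names are hypothetical stand-ins for separate cache servers:

```python
import hashlib

nodes = {"node-a": {}, "node-b": {}}
node_names = sorted(nodes)

def pick_node(key):
    # stable hash -> the same key always routes to the same node
    digest = hashlib.md5(key.encode()).hexdigest()
    return node_names[int(digest, 16) % len(node_names)]

def cache_set(key, value):
    nodes[pick_node(key)][key] = value

def cache_get(key):
    return nodes[pick_node(key)].get(key)   # miss -> fall back to the database

for i in range(100):
    cache_set(f"session:{i}", {"user": i})

print({name: len(data) for name, data in nodes.items()})  # records split across nodes
```

Real deployments typically use consistent hashing plus replication so that adding or losing a node only remaps a fraction of the keys.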