One of the most common system design interview questions. Classic Flipkart Big Billion Day scenario.
The Problem
1 item left in stock
50,000 requests hit simultaneously
Each request:
→ Read inventory: "is stock > 0?" → yes (1 left)
→ Place order
→ Deduct: stock = stock - 1
All 50,000 read stock = 1 simultaneously
All 50,000 see "available"
All 50,000 place orders
Result: 50,000 orders for 1 item → stock = -49,999 ❌
This is a race condition — multiple operations reading and writing shared state simultaneously without coordination.
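The race above can be sketched in a few lines of Python. This is a simulation, not real traffic: every "request" reads the shared stock first, then they all act on what they saw, which is exactly the worst-case interleaving of 50,000 concurrent threads.

```python
# Minimal sketch of the read-check-write race (all names illustrative).
stock = 1
orders = []

REQUESTS = 50_000

# Phase 1: every request reads stock before anyone has written.
reads = [stock for _ in range(REQUESTS)]

# Phase 2: every request saw stock > 0, so every request places an order.
for seen in reads:
    if seen > 0:
        orders.append("order")
        stock -= 1

print(len(orders))  # 50000 orders for 1 item
print(stock)        # -49999
```

The check and the decrement are two separate steps, and nothing stops 50,000 requests from all passing the check before any decrement lands.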
Why Naive Solutions Fail
Database Transactions Alone
BEGIN TRANSACTION
SELECT stock WHERE product_id = 123 -- returns 1
IF stock > 0:
INSERT order
UPDATE stock = stock - 1
COMMIT
Problem: 50,000 transactions start simultaneously. Under default isolation levels (e.g. READ COMMITTED), the plain SELECT takes no lock, so all 50,000 read stock = 1 before any of them commits. All pass the check, all run the UPDATE, and every update applies → stock goes deeply negative.
Standard transactions don’t prevent this because the read acquires no lock — nothing stops concurrent transactions from all reading the same stale value.
Cache Check Alone
Check Redis for stock
→ All 50,000 read stock = 1 from Redis
→ All proceed
→ Same race condition, just faster ❌
The Real Solutions
Solution 1 — Optimistic Locking
Add a version number to every inventory record:
inventory: product_id | stock | version
123 | 1 | 47
Every update:
UPDATE inventory
SET stock = stock - 1,
version = version + 1
WHERE product_id = 123
AND version = 47 -- only update if version matches
AND stock > 0
Returns rows_affected = 1 (success) or 0 (someone else got there first).
All 50,000 read: stock = 1, version = 47
All 50,000 attempt UPDATE WHERE version = 47
Request 1 commits → version becomes 48 ✅
Request 2 tries WHERE version = 47 → fails (now 48)
Requests 3-50,000 → all fail
Only 1 order placed ✅
Limitation: Database still receives 50,000 write attempts. Good for correctness, doesn’t solve the load problem.
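Here is a runnable sketch of the pattern using SQLite (schema and values mirror the example above; the 1,000 calls simulate concurrent requests that all read version = 47 before racing to update):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (product_id INTEGER, stock INTEGER, version INTEGER)")
conn.execute("INSERT INTO inventory VALUES (123, 1, 47)")

def try_buy(conn, product_id, seen_version):
    # The conditional UPDATE succeeds only if nobody bumped the version
    # since we read it; rowcount tells us whether we won the race.
    cur = conn.execute(
        "UPDATE inventory SET stock = stock - 1, version = version + 1 "
        "WHERE product_id = ? AND version = ? AND stock > 0",
        (product_id, seen_version),
    )
    conn.commit()
    return cur.rowcount == 1

# All requests read stock = 1, version = 47, then race to update.
results = [try_buy(conn, 123, 47) for _ in range(1000)]
print(results.count(True))  # 1 — exactly one winner
```

The first UPDATE bumps the version to 48, so every later attempt with `version = 47` matches zero rows and fails cleanly, with no lock held between read and write.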
Solution 2 — Pessimistic Locking
BEGIN TRANSACTION
SELECT stock FROM inventory
WHERE product_id = 123
FOR UPDATE -- locks this row exclusively
IF stock > 0:
INSERT order
UPDATE stock = stock - 1
COMMIT -- lock released
Request 1 acquires lock → proceeds. Requests 2–50,000 wait.
Limitation: 50,000 open database connections waiting for one lock → connection pool exhausted → database crashes → your entire app breaks. ❌
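The serialization that FOR UPDATE provides can be modeled in-process with a single lock standing in for the row lock (a sketch of the semantics, not a real database):

```python
import threading

stock = 1
orders = 0
row_lock = threading.Lock()  # stands in for the exclusive row-level lock

def buy():
    global stock, orders
    with row_lock:        # acquire the "row lock"; everyone else blocks here
        if stock > 0:     # check and decrement are now serialized
            orders += 1
            stock -= 1

threads = [threading.Thread(target=buy) for _ in range(1000)]
for t in threads: t.start()
for t in threads: t.join()

print(orders, stock)  # 1 0
```

Correct, but note where all the other threads spend their time: blocked on the lock. Replace threads with database connections and you have the connection-pool exhaustion described above.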
Solution 3 — Redis Atomic Operations ✅
Redis DECR is atomic and Redis is single-threaded — no two commands can interleave.
# Set initial stock
SET "stock:product123" 1
# 50,000 requests each execute:
DECR "stock:product123"
Redis processes one at a time:
Request 1 → DECR → returns 0 → stock was available ✅
Request 2 → DECR → returns -1 → stock gone, INCR back ❌
Request 3 → DECR → returns -2 → stock gone, INCR back ❌
...
After DECR:
→ value >= 0 → proceed with order
→ value < 0 → INCR back → show "sold out"
Or use a Lua script for atomic check-and-decrement:
local stock = tonumber(redis.call('GET', KEYS[1]) or 0)
if stock > 0 then
  redis.call('DECR', KEYS[1])
  return 1 -- proceed
else
  return 0 -- sold out
end
Entire check + decrement is one atomic operation. 50,000 requests → 1 succeeds, 49,999 get “sold out” in microseconds. Only 1 hits the database. ✅
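The DECR-then-INCR-back pattern can be demonstrated without a Redis server by modeling its single command thread with one lock (with redis-py the calls would be `r.decr(...)` / `r.incr(...)` against a real instance; `FakeRedis` here is purely illustrative):

```python
import threading

class FakeRedis:
    """Tiny stand-in for Redis: one lock models its single command thread."""
    def __init__(self):
        self._data, self._lock = {}, threading.Lock()
    def set(self, key, value):
        with self._lock:
            self._data[key] = int(value)
    def decr(self, key):
        with self._lock:
            self._data[key] -= 1
            return self._data[key]
    def incr(self, key):
        with self._lock:
            self._data[key] += 1
            return self._data[key]

r = FakeRedis()
r.set("stock:product123", 1)

wins = []
def request():
    if r.decr("stock:product123") >= 0:
        wins.append(1)              # stock was available → proceed to order
    else:
        r.incr("stock:product123")  # oversold → restore and show "sold out"

threads = [threading.Thread(target=request) for _ in range(1000)]
for t in threads: t.start()
for t in threads: t.join()
print(len(wins))  # 1
```

Because each DECR is atomic, only one request can ever observe a non-negative result when starting stock is 1 — no matter how the threads interleave.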
Solution 4 — Request Queue ✅
What Flipkart and Amazon actually use for big sales:
50,000 requests arrive
↓
Don't process directly — put all in queue (Kafka / Redis Queue)
↓
Single Order Processor reads ONE at a time
↓
Request 1: stock = 1 → place order → stock = 0 ✅
Request 2: stock = 0 → sold out ❌
Request 3: stock = 0 → sold out ❌
...
Race condition is impossible — single processor. Queue absorbs the spike. Database gets controlled load. First-come-first-served ordering is fair.
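The flow above can be sketched with Python's in-process `queue.Queue` standing in for Kafka / Redis Queue (an illustration of the single-consumer idea, not production plumbing):

```python
import queue
import threading

orders_q = queue.Queue()
stock = 1
results = {}

def order_processor():
    """The single consumer: drains the queue one request at a time."""
    global stock
    while True:
        user = orders_q.get()
        if user is None:          # sentinel: sale is over
            break
        if stock > 0:
            stock -= 1
            results[user] = "confirmed"
        else:
            results[user] = "sold out"

processor = threading.Thread(target=order_processor)
processor.start()

for user_id in range(1000):       # 1,000 simultaneous buyers just enqueue
    orders_q.put(user_id)
orders_q.put(None)
processor.join()

print(sum(1 for r in results.values() if r == "confirmed"))  # 1
```

Producers never touch the stock; only the processor does, so there is no shared-state race to coordinate, and queue order gives first-come-first-served for free.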
User experience:
User clicks buy
→ "You are in queue — position 14,832"
→ If reached before stock runs out → "Order confirmed!"
→ If stock runs out first → "Sorry, sold out"
That waiting room on Flipkart Big Billion Day and BookMyShow is exactly this queue.
Complete Production Architecture
These layers work together:
Layer 1 — Rate Limiting
→ Max 1 request/user/second
→ Eliminates bots and accidental double-clicks
→ 50,000 → ~10,000 legitimate requests
Layer 2 — Redis Atomic DECR
→ Rejects 9,999 with "sold out" in microseconds
→ 1 request proceeds
Layer 3 — Request Queue
→ Successful Redis check → add to order queue
→ Order processor handles one at a time
Layer 4 — Database Optimistic Locking
→ Final safety net
→ Even if two somehow slip through — DB catches it
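Layer 1 can be sketched as a fixed-window limiter allowing 1 request per user per second. In production this state would live in Redis (e.g. per-user INCR + EXPIRE) rather than a local dict; the in-memory version below just shows the logic:

```python
import time

last_seen = {}  # user_id -> timestamp of last accepted request

def allow(user_id, now=None):
    """Accept at most one request per user per second."""
    now = time.monotonic() if now is None else now
    if now - last_seen.get(user_id, float("-inf")) < 1.0:
        return False              # double-click or bot: reject immediately
    last_seen[user_id] = now
    return True

print(allow("u1", now=0.0))   # True  — first request passes
print(allow("u1", now=0.2))   # False — within the 1-second window
print(allow("u1", now=1.5))   # True  — window elapsed
```

This runs before anything touches Redis or the database, which is what turns 50,000 raw requests into ~10,000 legitimate ones.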
What Happens to the 49,999 Failed Requests
This is a product decision as much as a technical one:
| Option | Trade-off |
|---|---|
| Instant sold out | Fast, honest. Best for most cases. |
| Waitlist | If buyer cancels → next in line gets it. More complex. |
| Virtual queue | Show position, build anticipation, reduce frustration. BookMyShow uses this. |
The Core Insight
This problem has two separate challenges:
Challenge 1 — Correctness: only 1 order for 1 item → Solved by Redis atomic ops + optimistic locking
Challenge 2 — Scale: 50,000 requests without crashing → Solved by request queue + rate limiting + Redis as buffer
Most interview candidates solve one. Solving both is what makes the answer complete.
Interview Answer Structure
- Name the problem: “This is a race condition — concurrent reads and writes on shared state”
- Why naive solutions fail: transactions don’t prevent concurrent reads; cache checks have the same race
- Redis atomic solution: DECR is atomic and Redis is single-threaded — guarantees exactly one successful decrement
- Queue for load protection: absorbs the spike, protects the database
- Defense in depth: optimistic locking as final safety net, rate limiting to filter bots
- User experience: immediate sold out message or virtual queue
In CAP terms, inventory is CP — consistency is mandatory. You cannot show “available” when it’s sold out.