Concurrency & Async
Threading, synchronization, and async patterns
Process vs Thread
What is the difference between a process and a thread?
Process:
Independent execution unit with its own memory space.
┌─────────────────────────┐
│ Process │
│ ┌───────────────────┐ │
│ │ Memory Space │ │
│ │ Code, Data, Heap │ │
│ └───────────────────┘ │
│ ┌───────────────────┐ │
│ │ Resources │ │
│ │ Files, Sockets │ │
│ └───────────────────┘ │
└─────────────────────────┘
Thread:
Lightweight execution unit within a process, sharing memory.
┌─────────────────────────────────────┐
│ Process │
│ ┌─────────────────────────────┐ │
│ │ Shared Memory │ │
│ │ Code, Data, Heap │ │
│ └─────────────────────────────┘ │
│ ┌────────┐ ┌────────┐ ┌────────┐ │
│ │Thread 1│ │Thread 2│ │Thread 3│ │
│ │ Stack │ │ Stack │ │ Stack │ │
│ └────────┘ └────────┘ └────────┘ │
└─────────────────────────────────────┘
Comparison:
| Aspect | Process | Thread |
|---|---|---|
| Memory | Separate | Shared |
| Creation | Expensive | Cheap |
| Communication | IPC (pipes, sockets) | Direct memory |
| Isolation | High | Low |
| Crash impact | Isolated | Can crash process |
| Context switch | Expensive | Cheaper |
When to Use:
Processes:
- Isolation needed (crash protection)
- Running untrusted code
- Different users/permissions
- True multi-core CPU utilization in Python (the GIL limits threads to one core)
Threads:
- Shared state needed
- Lower overhead
- Same security context
- I/O parallelism
Example:
# Process
from multiprocessing import Process

def worker():
    print("Worker process")

if __name__ == "__main__":  # required on spawn-based platforms (Windows, macOS)
    p = Process(target=worker)
    p.start()
    p.join()

# Thread
from threading import Thread

def worker():
    print("Worker thread")

t = Thread(target=worker)
t.start()
t.join()
Key Points to Look For:
- Knows memory sharing difference
- Understands isolation trade-off
- Can choose appropriately
Follow-up: How does the Python GIL affect this choice?
Thread lifecycle and states
What are the states in a thread's lifecycle?
Thread States:
             ┌─────────┐
             │   NEW   │
             └────┬────┘
                  │ start()
             ┌────▼────┐
    ┌───────►│ RUNNABLE│
    │        └────┬────┘
    │             │ scheduler
    │        ┌────▼────┐
    │        │ RUNNING │────────────────────┐
    │        └─┬──┬──┬─┘                    │ run() completes
    │   ┌──────┘  │  └───────┐              │
    │   ▼         ▼          ▼              │
    │ ┌───────┐ ┌───────┐ ┌────────┐        │
    │ │BLOCKED│ │WAITING│ │ TIMED_ │        │
    │ └───┬───┘ └───┬───┘ │WAITING │        │
    │     │         │     └───┬────┘        │
    └─────┴─────────┴─────────┘        ┌────▼─────┐
     lock acquired / notify() /        │TERMINATED│
     timeout elapsed                   └──────────┘
States Explained:
1. NEW:
Thread created but not started.
Thread t = new Thread(runnable);
// t is in NEW state
2. RUNNABLE:
Ready to run, waiting for CPU.
t.start();
// t is RUNNABLE (may or may not be running)
3. RUNNING:
Currently executing on a CPU. Conceptual subset of RUNNABLE - Java's Thread.State has no separate RUNNING value.
4. BLOCKED:
Waiting to acquire a lock.
synchronized (lock) {
    // Another thread holds lock
    // This thread is BLOCKED
}
5. WAITING:
Waiting indefinitely for another thread.
object.wait(); // Waiting for notify()
thread.join(); // Waiting for thread to finish
LockSupport.park();
6. TIMED_WAITING:
Waiting for specified time.
Thread.sleep(1000);
object.wait(1000);
thread.join(1000);
7. TERMINATED:
Thread has finished execution.
Example:
Thread thread = new Thread(() -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // Restore interrupt status
    }
});
System.out.println(thread.getState()); // NEW
thread.start();
System.out.println(thread.getState()); // RUNNABLE
Thread.sleep(100);
System.out.println(thread.getState()); // TIMED_WAITING
thread.join();
System.out.println(thread.getState()); // TERMINATED
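To make the BLOCKED/WAITING distinction concrete, a minimal sketch (class name, sleeps, and timings are illustrative, not from the source):
import static java.lang.Thread.sleep;

public class BlockedVsWaiting {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (lock) { // Grab the monitor and hold it
                try { sleep(5000); } catch (InterruptedException e) { }
            }
        });
        Thread blocked = new Thread(() -> {
            synchronized (lock) { } // Must wait for holder's monitor
        });
        Thread waiting = new Thread(() -> {
            try { holder.join(); } catch (InterruptedException e) { }
        });

        holder.start();
        sleep(100);        // Let holder acquire the lock
        blocked.start();
        waiting.start();
        sleep(100);        // Let both reach their wait points
        System.out.println(blocked.getState()); // BLOCKED (monitor contention)
        System.out.println(waiting.getState()); // WAITING (join with no timeout)
    }
}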
Key Points to Look For:
- Knows all states
- Understands transitions
- Distinguishes WAITING vs BLOCKED
Follow-up: What's the difference between BLOCKED and WAITING?
Creating threads: different approaches
What are the different ways to create threads?
1. Extending Thread Class:
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("Running in: " + getName());
    }
}

MyThread t = new MyThread();
t.start();
2. Implementing Runnable:
class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Running");
    }
}

Thread t = new Thread(new MyRunnable());
t.start();

// Or with a lambda
Thread t2 = new Thread(() -> System.out.println("Running"));
t2.start();
3. Implementing Callable (Returns Value):
class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() {
        return 42;
    }
}

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<Integer> future = executor.submit(new MyCallable());
Integer result = future.get(); // 42
4. Using ExecutorService:
ExecutorService executor = Executors.newFixedThreadPool(4);
// Submit Runnable
executor.execute(() -> System.out.println("Task"));
// Submit Callable
Future<String> future = executor.submit(() -> "Result");
executor.shutdown();
5. Using CompletableFuture:
CompletableFuture.supplyAsync(() -> {
    return "Hello";
}).thenApply(s -> {
    return s + " World";
}).thenAccept(System.out::println);
Comparison:
| Approach | Return Value | Exception | Reuse |
|---|---|---|---|
| Thread | No | No | No |
| Runnable | No | No | Yes |
| Callable | Yes | Yes | Yes |
| ExecutorService | Optional | Yes | Pool |
| CompletableFuture | Yes | Yes | Chain |
Best Practice:
// Prefer ExecutorService over raw threads:
// better resource management, thread pool reuse
ExecutorService executor = Executors.newCachedThreadPool();
try {
    // Submit tasks
} finally {
    executor.shutdown();
}
Key Points to Look For:
- Knows multiple approaches
- Prefers Runnable/Callable over extending Thread
- Uses ExecutorService
Follow-up: Why is implementing Runnable preferred over extending Thread?
Thread pooling: why and how
Why use thread pools? How do you configure them?
Why Thread Pools:
Without Pool:
// Bad: creates a thread per request
void handleRequest(Request req) {
    new Thread(() -> process(req)).start();
}
// 10,000 requests = 10,000 threads = OutOfMemoryError
With Pool:
ExecutorService pool = Executors.newFixedThreadPool(10);
void handleRequest(Request req) {
    pool.submit(() -> process(req));
}
// 10,000 requests = 10 threads, remaining tasks queued
Benefits:
1. Resource management: Bounded threads
2. Performance: Reuse threads (no creation overhead)
3. Queueing: Tasks wait when threads busy
4. Monitoring: Track pool metrics
Pool Types:
Fixed Thread Pool:
ExecutorService fixed = Executors.newFixedThreadPool(10);
// Fixed number of threads
// Unbounded queue
Cached Thread Pool:
ExecutorService cached = Executors.newCachedThreadPool();
// Creates threads as needed
// Reuses idle threads
// Good for short-lived tasks
Single Thread:
ExecutorService single = Executors.newSingleThreadExecutor();
// Sequential execution
// Good for thread-unsafe resources
Scheduled Pool:
ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);
scheduled.scheduleAtFixedRate(task, 0, 1, TimeUnit.SECONDS);
Custom Pool:
ThreadPoolExecutor pool = new ThreadPoolExecutor(
    5,                                        // Core pool size
    10,                                       // Max pool size
    60, TimeUnit.SECONDS,                     // Keep-alive time for extra threads
    new LinkedBlockingQueue<>(100),           // Bounded work queue
    new ThreadPoolExecutor.CallerRunsPolicy() // Rejection policy
);
Rejection Policies:
// When the queue is full and max threads are reached (sketch below):
AbortPolicy          // Throws RejectedExecutionException (default)
CallerRunsPolicy     // The caller's thread runs the task itself
DiscardPolicy        // Silently discards the task
DiscardOldestPolicy  // Discards the oldest queued task, then retries
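A small sketch of the default AbortPolicy firing (pool and queue sizes are illustrative):
ThreadPoolExecutor tiny = new ThreadPoolExecutor(
    1, 1, 0, TimeUnit.SECONDS,
    new ArrayBlockingQueue<>(1));  // 1 running task + 1 queued task max

Runnable slow = () -> {
    try { Thread.sleep(1000); } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
};
tiny.execute(slow); // Runs immediately
tiny.execute(slow); // Sits in the queue
try {
    tiny.execute(slow); // No thread and no queue slot left
} catch (RejectedExecutionException e) {
    System.out.println("Rejected: " + e.getMessage());
}
tiny.shutdown();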
Sizing Guidelines:
CPU-bound tasks: threads = CPU cores
I/O-bound tasks: threads = CPU cores * (1 + wait_time/compute_time)
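Applying the I/O-bound formula in code - a sketch with assumed measurements (90 ms of wait per 10 ms of compute):
int cores = Runtime.getRuntime().availableProcessors();
double waitTime = 90, computeTime = 10; // Assumed profiling numbers
int poolSize = (int) (cores * (1 + waitTime / computeTime));
// e.g. 8 cores -> 8 * (1 + 9) = 80 threads
ExecutorService ioPool = Executors.newFixedThreadPool(poolSize);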
Key Points to Look For:
- Knows why pooling is important
- Can configure custom pool
- Understands rejection policies
Follow-up: How do you decide pool size for I/O-bound tasks?
Daemon threads vs User threads
What is the difference between daemon and user threads?
User Threads (Default):
JVM waits for all user threads to complete before exit.
Daemon Threads:
Background threads that don't prevent JVM exit.
Thread daemon = new Thread(() -> {
    while (true) {
        // Background work
    }
});
daemon.setDaemon(true); // Must be set before start()
daemon.start();
// Main thread exits → daemon is killed automatically
Comparison:
| Aspect | User Thread | Daemon Thread |
|---|---|---|
| JVM exit | Waits for completion | Doesn't wait |
| Use case | Main application work | Support/background |
| Creation | Default | setDaemon(true) |
| Priority | Normal | Often lower |
Use Cases:
Daemon Threads:
- Garbage collection
- Background logging
- Cache cleanup
- Heartbeat monitoring
User Threads:
- Request handling
- Transaction processing
- Any critical work
Example:
public class DaemonExample {
    public static void main(String[] args) throws InterruptedException {
        Thread daemon = new Thread(() -> {
            while (true) {
                System.out.println("Daemon running...");
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    break;
                }
            }
        });
        daemon.setDaemon(true);
        daemon.start();
        System.out.println("Main thread sleeping...");
        Thread.sleep(2000);
        System.out.println("Main thread exiting");
        // Daemon thread is killed automatically when the JVM exits
    }
}
Important Notes:
1. setDaemon() must be called before start() (afterwards it throws IllegalThreadStateException)
2. A thread created by a daemon thread is also a daemon (see the sketch below)
3. Daemon threads shouldn't hold resources needing cleanup - their finally blocks may never run on JVM exit
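Note 2 can be checked directly; a minimal sketch (names are illustrative, run inside a main that declares throws InterruptedException):
Thread parent = new Thread(() -> {
    Thread child = new Thread(() -> {});
    // Daemon status is inherited from the creating thread
    System.out.println("child.isDaemon() = " + child.isDaemon()); // true
});
parent.setDaemon(true);
parent.start();
parent.join();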
Key Points to Look For:
- Knows JVM exit behavior
- Appropriate use cases
- Understands resource implications
Follow-up: What happens to daemon thread resources when JVM exits?
Synchronization
Race conditions: what and how to prevent
What is a race condition? How do you prevent it?
Race Condition:
When program behavior depends on the timing of thread execution.
Example:
class Counter {
    private int count = 0;

    void increment() {
        count++; // Not atomic!
        // 1. Read count
        // 2. Add 1
        // 3. Write count
    }
}

// Two threads incrementing:
// Thread A: Read 0
// Thread B: Read 0
// Thread A: Write 1
// Thread B: Write 1
// Expected: 2, Actual: 1
Prevention Methods:
1. Synchronized Block:
class Counter {
    private int count = 0;
    private final Object lock = new Object();

    void increment() {
        synchronized (lock) {
            count++;
        }
    }
}
2. Synchronized Method:
class Counter {
    private int count = 0;

    synchronized void increment() {
        count++;
    }
}
3. Atomic Variables:
class Counter {
    private final AtomicInteger count = new AtomicInteger(0);

    void increment() {
        count.incrementAndGet();
    }
}
4. Locks:
class Counter {
    private int count = 0;
    private final Lock lock = new ReentrantLock();

    void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }
}
5. Immutability:
// Immutable objects can't have race conditions
final class ImmutablePoint {
    private final int x, y;

    ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
Common Race Condition Patterns:
- Check-then-act (see the sketch below)
- Read-modify-write
- Compound operations
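A sketch of the check-then-act pattern going wrong (class name and the load() helper are hypothetical):
class LazyCache {
    private final Map<String, Object> map = new HashMap<>();

    Object get(String key) {
        if (!map.containsKey(key)) {   // check
            map.put(key, load(key));   // act - another thread may have
        }                              // put between the check and the act
        return map.get(key);
    }
}
// Fix: ConcurrentHashMap.computeIfAbsent(key, this::load) does both atomically.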
Key Points to Look For:
- Understands the problem
- Knows multiple solutions
- Mentions atomic operations
Follow-up: How do you detect race conditions?
Mutex vs Semaphore
What is the difference between a mutex and a semaphore?
Mutex (Mutual Exclusion):
Binary lock - only one thread can hold it.
// Only one thread at a time
Lock mutex = new ReentrantLock();

void criticalSection() {
    mutex.lock();
    try {
        // Only one thread here
    } finally {
        mutex.unlock();
    }
}
Semaphore:
Counter - allows N threads to hold permits.
// Up to 3 threads at a time
Semaphore semaphore = new Semaphore(3);

void limitedAccess() throws InterruptedException {
    semaphore.acquire(); // Decrement; blocks if 0 permits
    try {
        // Up to 3 threads here
    } finally {
        semaphore.release(); // Increment
    }
}
Comparison:
| Aspect | Mutex | Semaphore |
|---|---|---|
| Permits | 1 (binary) | N (counting) |
| Ownership | Yes (only owner unlocks) | No |
| Use case | Exclusive access | Limited concurrent access |
| Reentrant | Often yes | No |
Use Cases:
Mutex:
- Protecting shared resource
- Critical sections
- Single-writer access
Semaphore:
- Connection pool (limit to N connections)
- Rate limiting
- Producer-consumer (bounded buffer)
Example - Connection Pool:
class ConnectionPool {
    private final Semaphore semaphore;
    private final Queue<Connection> connections;

    ConnectionPool(int size) {
        semaphore = new Semaphore(size);
        connections = new LinkedList<>();
        // Initialize connections
    }

    Connection acquire() throws InterruptedException {
        semaphore.acquire(); // Wait for a permit
        synchronized (connections) {
            return connections.poll();
        }
    }

    void release(Connection conn) {
        synchronized (connections) {
            connections.offer(conn);
        }
        semaphore.release(); // Return the permit
    }
}
Binary Semaphore vs Mutex:
A binary semaphore (1 permit) is similar, but (see the sketch below):
- No ownership concept
- Any thread can release
- Not reentrant
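A sketch of a Semaphore(1) standing in for a mutex - note the missing ownership check:
Semaphore binary = new Semaphore(1);

void criticalSection() throws InterruptedException {
    binary.acquire();
    try {
        // Exclusive access, like a mutex
    } finally {
        binary.release(); // Could legally be called by a different thread -
    }                     // no ownership, unlike ReentrantLock.unlock()
}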
Key Points to Look For:
- Knows permit count difference
- Understands ownership
- Can give use cases
Follow-up: Can a semaphore be used as a mutex?
Deadlock: conditions and prevention
What causes deadlock? How do you prevent it?
Deadlock:
Two or more threads waiting for each other, none can proceed.
Thread A holds Lock 1, waits for Lock 2
Thread B holds Lock 2, waits for Lock 1
Thread A ──holds──→ Lock 1
    │                 ▲
  waits             waits
    ▼                 │
  Lock 2 ←──holds── Thread B
Four Conditions (ALL required):
1. Mutual Exclusion:
Resources cannot be shared.
2. Hold and Wait:
Thread holds resource while waiting for another.
3. No Preemption:
Resources cannot be forcibly taken.
4. Circular Wait:
Circular chain of threads waiting.
Example:
Object lock1 = new Object();
Object lock2 = new Object();

// Thread A
synchronized (lock1) {
    Thread.sleep(100);
    synchronized (lock2) { // Waits for B
        // Work
    }
}

// Thread B
synchronized (lock2) {
    Thread.sleep(100);
    synchronized (lock1) { // Waits for A
        // Work
    }
}
Prevention Strategies:
1. Lock Ordering:
// Always acquire locks in the same global order
void transfer(Account from, Account to, int amount) {
    Account first  = from.id < to.id ? from : to;
    Account second = from.id < to.id ? to : from;
    synchronized (first) {
        synchronized (second) {
            // Safe transfer
        }
    }
}
2. Lock Timeout:
if (lock.tryLock(1, TimeUnit.SECONDS)) {
    try {
        // Work
    } finally {
        lock.unlock();
    }
} else {
    // Handle timeout (back off, retry, or fail)
}
3. Single Lock:
// Use one lock for related resources
// Use one lock for related resources
synchronized (accountLock) {
    // Access both accounts
}
4. Deadlock Detection:
// JVM can detect deadlocks
ThreadMXBean bean = ManagementFactory.getThreadMXBean();
long[] deadlockedThreads = bean.findDeadlockedThreads();
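Building on the snippet above, a sketch of a periodic watchdog that logs deadlock details (interval and output format are illustrative):
ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
watchdog.scheduleAtFixedRate(() -> {
    ThreadMXBean bean = ManagementFactory.getThreadMXBean();
    long[] ids = bean.findDeadlockedThreads(); // null if none
    if (ids != null) {
        for (ThreadInfo info : bean.getThreadInfo(ids)) {
            System.err.println(info.getThreadName()
                + " blocked on " + info.getLockName()
                + " held by " + info.getLockOwnerName());
        }
    }
}, 0, 10, TimeUnit.SECONDS);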
Key Points to Look For:
- Knows all four conditions
- Has prevention strategies
- Can implement lock ordering
Follow-up: How do you debug deadlocks in production?
Livelock and starvation
What are livelock and starvation? How do they differ from deadlock?
Deadlock: Threads blocked, not making progress.
Livelock: Threads active but not making progress.
Starvation: Thread never gets resources.
Livelock Example:
Two people in hallway, both step aside:
Person A: Steps left
Person B: Steps left
Person A: Steps right
Person B: Steps right
... forever ...
// Both threads keep retrying, never succeed
class LivelockExample {
    Resource resource = new Resource();

    void worker1() throws InterruptedException {
        while (true) {
            if (resource.isAvailable()) {
                if (resource.tryAcquire()) {
                    // Use resource
                    resource.release();
                    return;
                }
            }
            // Be "polite" - back off and retry
            Thread.sleep(100);
        }
    }

    void worker2() throws InterruptedException {
        // Same logic - both keep deferring to each other
    }
}
Livelock Prevention:
// Add randomness to break the symmetry
void worker() throws InterruptedException {
    Random random = new Random();
    while (true) {
        if (resource.isAvailable()) {
            if (resource.tryAcquire()) {
                // Use resource
                resource.release();
                return;
            }
        }
        // Random backoff
        Thread.sleep(random.nextInt(100));
    }
}
Starvation:
Thread never gets CPU or resources.
// A CPU-hogging high-priority thread can starve a low-priority one
Thread highPriority = new Thread(() -> {
    while (true) { /* CPU intensive */ }
});
highPriority.setPriority(Thread.MAX_PRIORITY);

Thread lowPriority = new Thread(() -> {
    // Rarely gets CPU time
});
lowPriority.setPriority(Thread.MIN_PRIORITY);
Starvation Prevention:
1. Fair locks:
   ReentrantLock fairLock = new ReentrantLock(true); // FIFO ordering
2. Avoid priority inversion
3. Resource allocation policies
Comparison:
| Issue | State | Progress | Solution |
|---|---|---|---|
| Deadlock | Blocked | None | Lock ordering |
| Livelock | Active | None (busy) | Random backoff |
| Starvation | Ready/Blocked | Others progress | Fair scheduling |
Key Points to Look For:
- Distinguishes all three
- Knows prevention techniques
- Understands fair locks
Follow-up: What is priority inversion?
Read-write locks
What are read-write locks? When would you use them?
Read-Write Lock:
Multiple readers OR one writer.
Readers: Can run concurrently (shared access)
Writers: Exclusive access (blocks all)
Without Read-Write Lock:
class Cache {
    private final Map<String, Object> map = new HashMap<>();
    private final Object lock = new Object();

    Object get(String key) {
        synchronized (lock) { // Blocks other readers!
            return map.get(key);
        }
    }

    void put(String key, Object value) {
        synchronized (lock) {
            map.put(key, value);
        }
    }
}
With Read-Write Lock:
class Cache {
    private final Map<String, Object> map = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock readLock = lock.readLock();
    private final Lock writeLock = lock.writeLock();

    Object get(String key) {
        readLock.lock();
        try {
            return map.get(key); // Multiple readers OK
        } finally {
            readLock.unlock();
        }
    }

    void put(String key, Object value) {
        writeLock.lock();
        try {
            map.put(key, value); // Exclusive access
        } finally {
            writeLock.unlock();
        }
    }
}
Comparison:
| Scenario | Mutex | Read-Write Lock |
|---|---|---|
| Read + Read | Sequential | Concurrent |
| Read + Write | Sequential | Sequential |
| Write + Write | Sequential | Sequential |
When to Use:
- Read-heavy workloads
- Reads significantly outnumber writes
- Read operations are non-trivial
When NOT to Use:
- Write-heavy workloads
- Very short read operations (lock overhead)
- Simple operations
Fairness:
// Fair read-write lock
ReadWriteLock fairLock = new ReentrantReadWriteLock(true);
// Writers won't be starved by continuous readers
StampedLock (Java 8+):
StampedLock sl = new StampedLock();

// Optimistic read (no lock acquired)
long stamp = sl.tryOptimisticRead();
int x = this.x;
int y = this.y;
if (!sl.validate(stamp)) {
    // A write happened; fall back to a real read lock
    stamp = sl.readLock();
    try {
        x = this.x;
        y = this.y;
    } finally {
        sl.unlockRead(stamp);
    }
}
Key Points to Look For:
- Knows concurrent reads benefit
- Understands write exclusivity
- Mentions StampedLock
Follow-up: What is writer starvation in read-write locks?
Lock-free programming basics
What is lock-free programming? How does it work?
Lock-Free:
Guarantees system-wide progress: at least one thread completes its operation in a finite number of steps, so the system as a whole cannot deadlock.
Compare-And-Swap (CAS):
Atomic operation: "If current value == expected, set to new value"
// Pseudocode
boolean CAS(memory, expected, newValue) {
    if (memory.value == expected) {  // Executed atomically by the CPU
        memory.value = newValue;
        return true;
    }
    return false;
}
AtomicInteger Example:
AtomicInteger count = new AtomicInteger(0);

// Lock-free increment
void increment() {
    int current, next;
    do {
        current = count.get();
        next = current + 1;
    } while (!count.compareAndSet(current, next));
    // Retry if another thread changed it between the get and the CAS
}

// Or use the built-in
count.incrementAndGet(); // Uses CAS internally
Lock-Free Stack:
class LockFreeStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    void push(T value) {
        Node<T> newNode = new Node<>(value);
        Node<T> currentTop;
        do {
            currentTop = top.get();
            newNode.next = currentTop;
        } while (!top.compareAndSet(currentTop, newNode));
    }

    T pop() {
        Node<T> currentTop;
        Node<T> newTop;
        do {
            currentTop = top.get();
            if (currentTop == null) return null;
            newTop = currentTop.next;
        } while (!top.compareAndSet(currentTop, newTop));
        return currentTop.value;
    }
}
ABA Problem:
Thread 1: Read A
Thread 2: Change A → B → A
Thread 1: CAS succeeds (sees A), but state changed!
Solution: AtomicStampedReference
AtomicStampedReference<Node> ref = new AtomicStampedReference<>(node, 0);

int[] stampHolder = new int[1];
Node current = ref.get(stampHolder);
int stamp = stampHolder[0];
// CAS succeeds only if both the reference AND the stamp match
ref.compareAndSet(current, newNode, stamp, stamp + 1);
Benefits:
- No deadlocks
- Better scalability
- Lower overhead (sometimes)
Drawbacks:
- Complex to implement
- Harder to reason about
- Can have livelock
- ABA problem
Key Points to Look For:
- Understands CAS operation
- Knows ABA problem
- Can implement basic structure
Follow-up: What is the difference between lock-free and wait-free?
Async Patterns
Async/Await: how it works under the hood
How does async/await work under the hood?
Async/Await:
Syntactic sugar for writing asynchronous code that looks synchronous.
Without Async:
function fetchUserData(userId) {
  return fetch(`/users/${userId}`)
    .then(response => response.json())
    .then(user =>
      fetch(`/posts?userId=${user.id}`)
        .then(response => response.json())
        .then(posts => ({ user, posts })) // user stays in scope via nesting
    );
}
With Async/Await:
async function fetchUserData(userId) {
  const response = await fetch(`/users/${userId}`);
  const user = await response.json();
  const postsResponse = await fetch(`/posts?userId=${user.id}`);
  const posts = await postsResponse.json();
  return { user, posts };
}
Under the Hood:
1. State Machine:
Compiler transforms async function into state machine.
// Conceptual transformation
function fetchUserData(userId) {
  return new Promise((resolve, reject) => {
    let state = 0;
    let user, response;
    function step(value) {
      try {
        switch (state) {
          case 0:
            state = 1;
            return fetch(`/users/${userId}`).then(step);
          case 1:
            response = value;
            state = 2;
            return response.json().then(step);
          case 2:
            user = value;
            state = 3;
            return fetch(`/posts?userId=${user.id}`).then(step);
          // ... more states, ending with resolve(result)
        }
      } catch (e) {
        reject(e);
      }
    }
    step();
  });
}
2. Event Loop (JavaScript):
Call Stack     │ Task Queue     │ Microtask Queue
───────────────┼────────────────┼─────────────────
main()         │                │
  ↓            │                │
await fetch()  │                │
  ↓            │                │
(suspends)     │ fetch done ────┼──→ continuation
               │                │
Event loop picks up the continuation and resumes the function
3. Continuation:
- await suspends function
- Promise resolves → continuation queued
- Event loop resumes function
Java (CompletableFuture):
CompletableFuture<User> fetchUser(int id) {
    return CompletableFuture.supplyAsync(() -> {
        return httpClient.get("/users/" + id);
    });
}

// Chained like async/await
fetchUser(1)
    .thenCompose(user -> fetchPosts(user.getId()))
    .thenAccept(posts -> process(posts));
Key Points to Look For:
- Knows state machine transformation
- Understands non-blocking nature
- Can explain continuation concept
Follow-up: What's the difference between async/await and threads?
Event loop and non-blocking I/O
How does the event loop enable non-blocking I/O?
Traditional Blocking I/O:
Thread 1: read() ─────[BLOCKED]───── Data arrives → Continue
Thread 2: read() ─────[BLOCKED]───── Data arrives → Continue
Thread 3: read() ─────[BLOCKED]───── ...
Each connection = 1 thread
10,000 connections = 10,000 threads = Problem
Non-Blocking with Event Loop:
Single Thread:
┌──────────────────────────────────────────────┐
│ Event Loop │
│ ┌────────────────────────────────────┐ │
│ │ Event Queue │ │
│ │ [request1] [timer] [IO_complete] │ │
│ └──────────────┬─────────────────────┘ │
│ ↓ │
│ ┌──────────────────────────────────────┐ │
│ │ Process Event │ │
│ │ If I/O needed: │ │
│ │ Register callback, continue │ │
│ │ If computation: │ │
│ │ Execute, add result to queue │ │
│ └──────────────────────────────────────┘ │
└──────────────────────────────────────────────┘
JavaScript Event Loop:
console.log('1'); // Sync

setTimeout(() => {
  console.log('2'); // Task queue
}, 0);

Promise.resolve().then(() => {
  console.log('3'); // Microtask queue
});

console.log('4'); // Sync

// Output: 1, 4, 3, 2
Order of Execution:
1. Synchronous code
2. Microtasks (promises)
3. Tasks (setTimeout, I/O)
4. Render (browser)
5. Repeat
Node.js Event Loop Phases:
┌───────────────────────────┐
┌─→│ timers │ ← setTimeout, setInterval
│ └─────────────┬─────────────┘
│ ┌─────────────▼─────────────┐
│ │ pending callbacks │ ← I/O callbacks
│ └─────────────┬─────────────┘
│ ┌─────────────▼─────────────┐
│ │ idle, prepare │
│ └─────────────┬─────────────┘
│ ┌─────────────▼─────────────┐
│ │ poll │ ← New I/O events
│ └─────────────┬─────────────┘
│ ┌─────────────▼─────────────┐
│ │ check │ ← setImmediate
│ └─────────────┬─────────────┘
│ ┌─────────────▼─────────────┐
│ │ close callbacks │ ← socket.on('close')
└──┴───────────────────────────┘
Why It Works:
- Most time spent waiting for I/O
- Single thread does actual work
- OS handles I/O waiting (epoll, kqueue) - see the Java NIO sketch below
- Callback when I/O complete
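The same one-thread-many-connections pattern can be sketched in Java NIO, whose Selector sits on top of epoll/kqueue (port, buffer size, and handling are illustrative):
Selector selector = Selector.open();
ServerSocketChannel server = ServerSocketChannel.open();
server.bind(new InetSocketAddress(8080));
server.configureBlocking(false);
server.register(selector, SelectionKey.OP_ACCEPT);

while (true) {
    selector.select(); // Blocks until the OS reports a ready channel
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isAcceptable()) {
            SocketChannel client = server.accept();
            client.configureBlocking(false);
            client.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
            ByteBuffer buf = ByteBuffer.allocate(1024);
            ((SocketChannel) key.channel()).read(buf);
            // Handle data here - and never block this thread
        }
    }
}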
Key Points to Look For:
- Understands non-blocking concept
- Knows event queue processing
- Distinguishes microtasks/tasks
Follow-up: What happens if you block the event loop?
Callbacks vs Promises vs Async/Await
Compare callbacks, promises, and async/await.
Callbacks:
function fetchData(callback) {
  setTimeout(() => {
    callback(null, 'data');
  }, 1000);
}

fetchData((err, data) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(data);
});
Callback Hell:
getUser(userId, (err, user) => {
  getOrders(user.id, (err, orders) => {
    getItems(orders[0].id, (err, items) => {
      getDetails(items[0].id, (err, details) => {
        // Pyramid of doom
      });
    });
  });
});
Promises:
function fetchData() {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      resolve('data');
    }, 1000);
  });
}

fetchData()
  .then(data => console.log(data))
  .catch(err => console.error(err));

// Chaining
getUser(userId)
  .then(user => getOrders(user.id))
  .then(orders => getItems(orders[0].id))
  .then(items => getDetails(items[0].id))
  .catch(err => console.error(err));
Async/Await:
async function fetchAllData(userId) {
  try {
    const user = await getUser(userId);
    const orders = await getOrders(user.id);
    const items = await getItems(orders[0].id);
    const details = await getDetails(items[0].id);
    return details;
  } catch (err) {
    console.error(err);
  }
}
Comparison:
| Aspect | Callbacks | Promises | Async/Await |
|---|---|---|---|
| Readability | Poor (nesting) | Better (chaining) | Best (linear) |
| Error handling | Manual | .catch() | try/catch |
| Debugging | Difficult | Better | Best |
| Parallel | Manual | Promise.all | Promise.all |
| Return value | No | Yes | Yes |
Parallel Execution:
// Sequential
const a = await fetchA();
const b = await fetchB();
// Parallel
const [a, b] = await Promise.all([fetchA(), fetchB()]);
Error Handling:
// Promise
fetch('/api')
  .then(handleSuccess)
  .catch(handleError)
  .finally(cleanup);

// Async/Await
try {
  const result = await fetch('/api');
  handleSuccess(result);
} catch (err) {
  handleError(err);
} finally {
  cleanup();
}
Key Points to Look For:
- Knows evolution and benefits
- Understands error handling
- Can write parallel execution
Follow-up: How do you handle errors in Promise.all?
Reactive programming basics
What is reactive programming? When would you use it?
Reactive Programming:
Programming with asynchronous data streams.
Observable Stream:
Timeline: ────────────────────────────────────→
Events: ──1──2──3────4──5────────6──|
└──────────────────────────┘
Observable
Basic Example (RxJS):
import { fromEvent } from 'rxjs';
import { map, filter, debounceTime, distinctUntilChanged, switchMap } from 'rxjs/operators';

// Stream of click events
const clicks$ = fromEvent(document, 'click');

// Transform and filter
clicks$.pipe(
  map(event => event.target),
  filter(target => target.tagName === 'BUTTON')
).subscribe(button => console.log('Button clicked:', button));

// Search with debounce
const search$ = fromEvent(input, 'input').pipe(
  map(e => e.target.value),
  debounceTime(300),
  distinctUntilChanged(),
  switchMap(term => fetch(`/search?q=${term}`))
);
search$.subscribe(results => displayResults(results));
Key Concepts:
1. Observable:
Data source that emits values over time.
const observable = new Observable(subscriber => {
  subscriber.next(1);
  subscriber.next(2);
  setTimeout(() => {
    subscriber.next(3);
    subscriber.complete();
  }, 1000);
});
2. Operators:
Transform, filter, combine streams.
source$.pipe(
  map(x => x * 2),
  filter(x => x > 5),
  take(3)
);
3. Subscription:
Connecting to the stream.
const subscription = observable.subscribe({
  next: value => console.log(value),
  error: err => console.error(err),
  complete: () => console.log('Done')
});

// Cleanup
subscription.unsubscribe();
Common Operators:
- map, filter: Transform
- debounceTime, throttleTime: Rate limiting
- switchMap, mergeMap: Flatten nested observables
- combineLatest, merge: Combine streams
When to Use:
- Complex async logic
- Multiple event sources
- Real-time data
- UI events with debouncing
- Cancellable operations
Java (Project Reactor):
Flux.just(1, 2, 3)
    .map(i -> i * 2)
    .filter(i -> i > 2)
    .subscribe(System.out::println);
Key Points to Look For:
- Understands streams concept
- Knows common operators
- Can identify use cases
Follow-up: What's the difference between switchMap and mergeMap?
Backpressure handling
What is backpressure and how do you handle it?
Backpressure:
When producer emits faster than consumer can process.
Producer: ──1──2──3──4──5──6──7──8──9──→ (Fast)
↓
Consumer: 1────2────3── (Slow)
└── Overflow!
Without Backpressure Handling:
// Producer overwhelms consumer
Flux.range(1, 1000000)
    .subscribe(n -> {
        try { Thread.sleep(100); } catch (InterruptedException e) { } // Slow consumer
        process(n);
    });
// OutOfMemoryError - events pile up in queues
Backpressure Strategies:
1. Buffering:
Flux.range(1, 1000000)
    .onBackpressureBuffer(100) // Buffer up to 100 items
    .subscribe(slowConsumer);
2. Dropping:
Flux.range(1, 1000000)
    .onBackpressureDrop(dropped -> log("Dropped: " + dropped))
    .subscribe(slowConsumer);
// Newest items dropped when the consumer lags
3. Latest:
Flux.range(1, 1000000)
    .onBackpressureLatest()
    .subscribe(slowConsumer);
// Keep only the latest item
4. Error:
Flux.range(1, 1000000)
    .onBackpressureError()
    .subscribe(slowConsumer);
// Signals an error when overwhelmed
5. Request-based (Pull):
Flux.range(1, 1000000)
    .subscribe(new BaseSubscriber<Integer>() {
        @Override
        protected void hookOnSubscribe(Subscription subscription) {
            request(10); // Request 10 items up front
        }

        @Override
        protected void hookOnNext(Integer value) {
            process(value);
            request(1); // Request 1 more after each item
        }
    });
RxJS:
source$.pipe(
  // Sample - take the latest value every interval
  sampleTime(100),
  // Throttle - emit first, then wait
  throttleTime(100),
  // Debounce - wait for a pause in emissions
  debounceTime(100),
  // Buffer - collect and emit as a batch
  bufferTime(100)
);
Best Practices:
1. Always consider backpressure in reactive systems
2. Choose strategy based on data importance
3. Monitor buffer sizes
4. Consider batching
Key Points to Look For:
- Understands the problem
- Knows multiple strategies
- Can choose appropriate strategy
Follow-up: How does Kafka handle backpressure?
Concurrency Patterns
Producer-Consumer pattern
Implement the Producer-Consumer pattern.
Producer-Consumer:
Producers add items to a queue; consumers remove and process them.
Producers Queue Consumers
P1 ───┐ ┌─────────┐ ┌─── C1
P2 ───┼──→│●●●●●●●●●│───┼─── C2
P3 ───┘ └─────────┘ └─── C3
Using BlockingQueue:
class ProducerConsumer {
    private final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(10);

    class Producer implements Runnable {
        @Override
        public void run() {
            try {
                while (true) {
                    int item = produce();
                    queue.put(item); // Blocks if the queue is full
                    System.out.println("Produced: " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    class Consumer implements Runnable {
        @Override
        public void run() {
            try {
                while (true) {
                    int item = queue.take(); // Blocks if the queue is empty
                    consume(item);
                    System.out.println("Consumed: " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
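Wiring the classes above to a pool might look like this (thread counts are illustrative):
ProducerConsumer pc = new ProducerConsumer();
ExecutorService pool = Executors.newFixedThreadPool(4);
pool.submit(pc.new Producer()); // Two producers...
pool.submit(pc.new Producer());
pool.submit(pc.new Consumer()); // ...feeding two consumers
pool.submit(pc.new Consumer());
// Later: pool.shutdownNow() interrupts the blocking put/take calls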
Using wait/notify:
class BoundedBuffer {
    private final Object[] buffer;
    private int count, putIndex, takeIndex;

    BoundedBuffer(int capacity) {
        buffer = new Object[capacity];
    }

    synchronized void put(Object item) throws InterruptedException {
        while (count == buffer.length) {
            wait(); // Buffer full, wait
        }
        buffer[putIndex] = item;
        putIndex = (putIndex + 1) % buffer.length;
        count++;
        notifyAll(); // Wake consumers
    }

    synchronized Object take() throws InterruptedException {
        while (count == 0) {
            wait(); // Buffer empty, wait
        }
        Object item = buffer[takeIndex];
        takeIndex = (takeIndex + 1) % buffer.length;
        count--;
        notifyAll(); // Wake producers
        return item;
    }
}
Benefits:
- Decouples producers from consumers
- Handles speed differences
- Enables parallel processing
Use Cases:
- Task queues
- Message processing
- Pipeline processing
- Load leveling
Key Points to Look For:
- Uses blocking queue
- Handles interruption
- Understands bounded vs unbounded
Follow-up: What happens with unbounded queue?
Thread-safe singleton implementation
Implement a thread-safe singleton.
1. Eager Initialization:
public class Singleton {
    private static final Singleton INSTANCE = new Singleton();

    private Singleton() {}

    public static Singleton getInstance() {
        return INSTANCE;
    }
}
// Thread-safe: JVM handles class loading
// Con: Created even if not used
2. Lazy with Synchronized:
public class Singleton {
    private static Singleton instance;

    private Singleton() {}

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}
// Thread-safe, but synchronizes on every call
3. Double-Checked Locking:
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) { // First check (no lock)
            synchronized (Singleton.class) {
                if (instance == null) { // Second check (with lock)
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
// volatile required for memory visibility
4. Initialization-on-Demand Holder (Recommended):
public class Singleton {
    private Singleton() {}

    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}
// Lazy, thread-safe, no synchronization overhead
5. Enum Singleton (Best):
public enum Singleton {
    INSTANCE;

    public void doSomething() {
        // ...
    }
}
// Thread-safe, serialization-safe, reflection-proof
Comparison:
| Approach | Lazy | Thread-safe | Performance |
|---|---|---|---|
| Eager | No | Yes | Best |
| Synchronized | Yes | Yes | Worst |
| Double-check | Yes | Yes | Good |
| Holder | Yes | Yes | Best |
| Enum | No | Yes | Best |
Key Points to Look For:
- Knows multiple approaches
- Understands volatile in DCL
- Recommends enum or holder
Follow-up: Why is volatile needed in double-checked locking?
Concurrent collections and their uses
What concurrent collections are available and when do you use them?
Java Concurrent Collections:
1. ConcurrentHashMap:
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
// Thread-safe operations
map.put("key", 1);
map.get("key");
// Atomic compute
map.computeIfAbsent("key", k -> expensiveComputation());
map.merge("key", 1, Integer::sum); // Atomic increment
Use: High-concurrency maps, caches
2. CopyOnWriteArrayList:
CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
// Iteration never throws ConcurrentModificationException
for (String item : list) {
    list.add("new"); // Safe - iterates over a snapshot
}
Use: Read-heavy, small lists (listeners)
3. BlockingQueue:
BlockingQueue<Task> queue = new LinkedBlockingQueue<>(100);
// Producer
queue.put(task); // Blocks if full
// Consumer
Task task = queue.take(); // Blocks if empty
Use: Producer-consumer, task queues
4. ConcurrentLinkedQueue:
ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();
queue.offer("item");
String item = queue.poll();
Use: Non-blocking queues
5. ConcurrentSkipListMap/Set:
ConcurrentSkipListMap<String, Integer> sortedMap = new ConcurrentSkipListMap<>();
// Concurrent sorted map (like TreeMap but thread-safe)
Use: Concurrent sorted collections
Comparison:
| Collection | Thread-safe | Iteration | Performance |
|---|---|---|---|
| HashMap + sync | Yes (external) | Fail-fast | Moderate |
| Hashtable | Yes | Fail-fast | Poor |
| ConcurrentHashMap | Yes | Weakly consistent | Best |
| Collections.synchronizedMap | Yes (external) | Fail-fast | Moderate |
Choosing:
| Need | Collection |
|---|---|
| High-concurrency map | ConcurrentHashMap |
| Read-heavy list | CopyOnWriteArrayList |
| Task queue | BlockingQueue |
| Sorted concurrent | ConcurrentSkipListMap |
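A sketch of the "weakly consistent" iteration noted in the comparison table above (keys and values are illustrative):
Map<String, Integer> chm = new ConcurrentHashMap<>(Map.of("a", 1, "b", 2));
for (String k : chm.keySet()) {
    chm.put("c", 3); // OK: weakly consistent iterator, no exception
}

Map<String, Integer> sync = Collections.synchronizedMap(new HashMap<>(Map.of("a", 1, "b", 2)));
for (String k : sync.keySet()) {
    sync.put("c", 3); // Throws ConcurrentModificationException (fail-fast)
}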
Key Points to Look For:
- Knows multiple collections
- Understands use cases
- Mentions iteration behavior
Follow-up: What does "weakly consistent" iteration mean?
Actor model explained
What is the actor model? How does it simplify concurrency?
Actor Model:
Concurrency model where actors are fundamental units that:
- Have private state
- Communicate via messages
- Process one message at a time
┌───────────┐ message ┌───────────┐
│ Actor A │──────────────→│ Actor B │
│ [state] │ │ [state] │
│ [mailbox]│←──────────────│ [mailbox]│
└───────────┘ message └───────────┘
Key Principles:
1. No shared state
2. Communication only via messages
3. Actors can create other actors
4. One message at a time (no data races)
Akka Example (Scala/Java):
class Counter extends Actor {
  var count = 0 // Private state

  def receive = {
    case Increment => count += 1
    case GetCount  => sender() ! count
  }
}

// Create and send messages
val counter = system.actorOf(Props[Counter])
counter ! Increment
counter ! Increment
counter ! GetCount // Response: 2
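The same idea in plain Java, without a framework - a sketch of a mailbox drained by a single thread (class and method names are illustrative):
class CounterActor {
    private int count = 0; // Private state: touched only by the actor thread
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();

    CounterActor() {
        Thread t = new Thread(() -> {
            try {
                while (true) mailbox.take().run(); // One message at a time
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        t.setDaemon(true);
        t.start();
    }

    void increment() { mailbox.add(() -> count++); }           // Message, not lock
    void getCount(IntConsumer reply) { mailbox.add(() -> reply.accept(count)); }
}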
Benefits:
1. No locks: Messages processed sequentially
2. Isolation: Actor state is private
3. Distribution: Same model works across network
4. Fault tolerance: Supervision hierarchies
Supervision:
┌─────────────┐
│ Supervisor │
└──────┬──────┘
┌────┴────┐
↓ ↓
┌───────┐ ┌───────┐
│Actor 1│ │Actor 2│
└───────┘ └───────┘
Supervisor decides what to do when child fails:
- Resume (ignore failure)
- Restart (reset state)
- Stop (terminate)
- Escalate (pass to parent)
When to Use:
- Distributed systems
- High concurrency
- Fault tolerance needed
- Event-driven systems
Implementations:
- Akka (JVM)
- Erlang/OTP
- Microsoft Orleans
- Proto.Actor
Key Points to Look For:
- Knows core principles
- Understands message passing
- Mentions supervision
Follow-up: How does the actor model handle failures?
Fork-Join framework
What is the Fork-Join framework? How does it work?
Fork-Join:
Divide-and-conquer parallel execution framework.
Task
/ \
Fork Fork
↓ ↓
Subtask Subtask
↓ ↓
Result Result
\ /
Join
↓
Combined
Key Concepts:
1. ForkJoinPool:
ForkJoinPool pool = ForkJoinPool.commonPool();
// or
ForkJoinPool pool = new ForkJoinPool(4); // 4 threads
2. RecursiveTask (returns value):
class SumTask extends RecursiveTask<Long> {
    private final long[] array;
    private final int start, end;
    private static final int THRESHOLD = 1000;

    SumTask(long[] array, int start, int end) {
        this.array = array;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {
            // Base case: compute directly
            long sum = 0;
            for (int i = start; i < end; i++) {
                sum += array[i];
            }
            return sum;
        }
        // Recursive case: fork
        int mid = (start + end) / 2;
        SumTask left = new SumTask(array, start, mid);
        SumTask right = new SumTask(array, mid, end);
        left.fork();                        // Execute asynchronously
        long rightResult = right.compute(); // Execute in this thread
        long leftResult = left.join();      // Wait for the left half
        return leftResult + rightResult;
    }
}

// Usage
ForkJoinPool pool = new ForkJoinPool();
long sum = pool.invoke(new SumTask(array, 0, array.length));
3. RecursiveAction (no return):
class SortTask extends RecursiveAction {
    // array, start, end, and THRESHOLD declared and set as in SumTask
    @Override
    protected void compute() {
        if (end - start < THRESHOLD) {
            Arrays.sort(array, start, end);
        } else {
            int mid = (start + end) / 2;
            invokeAll(
                new SortTask(array, start, mid),
                new SortTask(array, mid, end)
            );
            merge(array, start, mid, end); // Merge step as in merge sort
        }
    }
}
Work Stealing:
Thread 1 queue: [T1, T2, T3]
Thread 2 queue: [] ← Steals T3 from Thread 1's tail
Thread 3 queue: [T4]
Idle threads steal from busy threads' queues
Parallel Streams (uses Fork-Join internally):
long sum = Arrays.stream(array)
    .parallel()
    .sum();
When to Use:
- Divide-and-conquer algorithms
- CPU-bound parallel tasks
- Large data processing
Key Points to Look For:
- Knows fork/join pattern
- Understands work stealing
- Knows threshold importance
Follow-up: Why is choosing the right threshold important?