Data Structures & Algorithms

Arrays, trees, graphs, sorting, and complexity analysis

1. Array vs LinkedList - time complexity comparison

Compare arrays and linked lists in terms of time complexity for common operations.

Junior

Time Complexity Comparison:

Operation            Array   LinkedList
Access by index      O(1)    O(n)
Search               O(n)    O(n)
Insert at beginning  O(n)    O(1)
Insert at end        O(1)*   O(1)**
Insert at middle     O(n)    O(n)***
Delete at beginning  O(n)    O(1)
Delete at end        O(1)    O(n)****

*Amortized for dynamic arrays
**If tail pointer maintained
***O(1) if you have a reference to the node
****O(1) for doubly linked list with tail pointer

Memory:
- Array: Contiguous memory, better cache locality
- LinkedList: Non-contiguous, each node has pointer overhead

// Array - direct index access
int[] arr = {1, 2, 3, 4, 5};
int val = arr[2];  // O(1)

// LinkedList - must traverse
Node current = head;
for (int i = 0; i < index; i++) {
    current = current.next;  // O(n)
}

When to Use:

Use Array When               Use LinkedList When
Random access needed         Frequent insertions/deletions
Known or fixed size          Unknown size, frequent resizing
Memory efficiency matters    Insert/delete at ends
Cache performance important  No random access needed

Key Points to Look For:
- Correct complexity for each operation
- Understanding of trade-offs
- Cache locality awareness

Follow-up: Why does cache locality make arrays faster in practice?

2. Stack applications: when would you use a stack?

What are the common applications of a stack data structure?

Junior

A stack is LIFO (Last In, First Out). Key operations: push, pop, peek - all O(1).

Common Applications:

1. Function Call Stack

main() calls foo() calls bar()

Stack:
| bar()  | ← top
| foo()  |
| main() |

2. Expression Evaluation

// Postfix evaluation: 3 4 + 2 *
// Push 3, push 4; on '+': pop both, push 3 + 4 = 7
// Push 2; on '*': pop both, push 7 * 2 = 14
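
A runnable sketch of the same idea, assuming single-digit operands and no separators (e.g. "34+2*"):

int evalPostfix(String expr) {
    Deque<Integer> stack = new ArrayDeque<>();
    for (char c : expr.toCharArray()) {
        if (Character.isDigit(c)) {
            stack.push(c - '0');                   // Operand: push its value
        } else {
            int b = stack.pop(), a = stack.pop();  // Operator: pop two operands
            switch (c) {
                case '+': stack.push(a + b); break;
                case '-': stack.push(a - b); break;
                case '*': stack.push(a * b); break;
                case '/': stack.push(a / b); break;
            }
        }
    }
    return stack.pop();
}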

3. Parentheses Matching

public boolean isValid(String s) {
    Stack<Character> stack = new Stack<>();
    for (char c : s.toCharArray()) {
        if (c == '(') stack.push(')');
        else if (c == '[') stack.push(']');
        else if (c == '{') stack.push('}');
        else if (stack.isEmpty() || stack.pop() != c)
            return false;
    }
    return stack.isEmpty();
}

4. Undo/Redo Operations

Stack<Action> undoStack = new Stack<>();
Stack<Action> redoStack = new Stack<>();

void doAction(Action a) {
    a.execute();
    undoStack.push(a);
    redoStack.clear();
}

void undo() {
    Action a = undoStack.pop();
    a.reverse();
    redoStack.push(a);
}

5. Browser History (Back Button)

6. DFS Traversal (can use explicit stack instead of recursion; see the sketch below)

7. Syntax Parsing (compilers)
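
For item 6, a minimal iterative DFS sketch; a Node type with a neighbors collection and a process() hook are assumed:

void dfsIterative(Node start) {
    Deque<Node> stack = new ArrayDeque<>();
    Set<Node> visited = new HashSet<>();
    stack.push(start);

    while (!stack.isEmpty()) {
        Node node = stack.pop();
        if (visited.contains(node)) continue;
        visited.add(node);
        process(node);                    // Visit on pop
        for (Node neighbor : node.neighbors) {
            stack.push(neighbor);         // Visited later, deepest first
        }
    }
}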

Key Points to Look For:
- Multiple real-world examples
- Understanding of LIFO property
- Code example for at least one

Follow-up: How would you implement a stack that also returns minimum in O(1)?

3. Queue types: FIFO, Priority Queue, Deque

Explain different types of queues and their use cases.

Junior

1. Regular Queue (FIFO)
First In, First Out.

Queue<Integer> queue = new LinkedList<>();
queue.offer(1);  // Add to back
queue.poll();    // Remove from front

Use: Task scheduling, BFS, print queues

2. Priority Queue (Heap)
Elements dequeued by priority, not arrival order.

PriorityQueue<Integer> minHeap = new PriorityQueue<>();
PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());

minHeap.offer(5);
minHeap.offer(1);
minHeap.offer(3);
minHeap.poll();  // Returns 1 (smallest)

Use: Dijkstra's algorithm, task scheduling by priority, median finding

3. Deque (Double-Ended Queue)
Add/remove from both ends.

Deque<Integer> deque = new ArrayDeque<>();
deque.addFirst(1);   // Add to front
deque.addLast(2);    // Add to back
deque.pollFirst();   // Remove from front
deque.pollLast();    // Remove from back

Use: Sliding window problems, undo/redo, palindrome checking

4. Circular Queue
Fixed-size queue that wraps around.

class CircularQueue {
    int[] arr;
    int front, rear, size;

    void enqueue(int val) {
        // Wrap around at the end of the array (capacity check omitted for brevity)
        rear = (rear + 1) % arr.length;
        arr[rear] = val;
        size++;
    }
}

Use: Buffer management, CPU scheduling

Complexity:

Operation  Queue  Priority Queue  Deque
Insert     O(1)   O(log n)        O(1)
Remove     O(1)   O(log n)        O(1)
Peek       O(1)   O(1)            O(1)

Key Points to Look For:
- Distinguishes between types
- Knows appropriate use cases
- Understands complexity differences

Follow-up: How would you implement a queue using two stacks?

4. Binary Tree vs Binary Search Tree

What is the difference between a Binary Tree and a Binary Search Tree?

Junior

Binary Tree (BT):
Each node has at most 2 children. No ordering requirement.

     5
    / \
   8   3
  /
 1

Binary Search Tree (BST):
Binary tree WITH ordering: left < parent < right (for all nodes).

     5
    / \
   3   8
  /
 1

Key Difference: BST Property
For every node N:
- All nodes in left subtree < N
- All nodes in right subtree > N

Complexity:

Operation  BT     BST (balanced)  BST (unbalanced)
Search     O(n)   O(log n)        O(n)
Insert     O(1)*  O(log n)        O(n)
Delete     O(n)   O(log n)        O(n)

*Finding the position is O(n)

BST Search:

Node search(Node root, int val) {
    if (root == null || root.val == val)
        return root;
    if (val < root.val)
        return search(root.left, val);
    return search(root.right, val);
}

Validating BST:

boolean isValidBST(Node root, long min, long max) {
    if (root == null) return true;
    if (root.val <= min || root.val >= max) return false;
    return isValidBST(root.left, min, root.val) &&
           isValidBST(root.right, root.val, max);
}
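
For completeness, a recursive insert sketch that preserves the ordering property:

Node insert(Node root, int val) {
    if (root == null) return new Node(val);
    if (val < root.val)
        root.left = insert(root.left, val);
    else if (val > root.val)
        root.right = insert(root.right, val);
    return root;  // Duplicates are ignored in this sketch
}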

Key Points to Look For:
- Clear understanding of ordering property
- Knows complexity implications
- Can validate BST property

Follow-up: What happens to BST performance if you insert sorted data?

5. Hash Table: how does hashing work? Collision handling

Explain how hash tables work and how collisions are handled.

Mid

How Hashing Works:
1. Key → Hash function → Hash code (integer)
2. Hash code → Bucket index (hash % array_size)
3. Store value at that bucket

// Note: Math.abs(Integer.MIN_VALUE) is still negative; real implementations mask the sign bit
int index = Math.abs(key.hashCode()) % buckets.length;
buckets[index] = value;

Collision: Two different keys map to same bucket.

Collision Handling:

1. Chaining (Separate Chaining)
Each bucket holds a linked list.

Bucket 0: → [A,1] → [D,4] → null
Bucket 1: → [B,2] → null
Bucket 2: → [C,3] → null

class HashTable {
    LinkedList<Entry>[] buckets;

    void put(K key, V value) {
        int index = hash(key);
        for (Entry e : buckets[index]) {
            if (e.key.equals(key)) {
                e.value = value;
                return;
            }
        }
        buckets[index].add(new Entry(key, value));
    }
}

2. Open Addressing (Linear Probing)
Find next empty slot.

void put(K key, V value) {
    int index = hash(key);
    while (buckets[index] != null) {           // Assumes the table is never full
        index = (index + 1) % buckets.length;  // Linear probe to the next slot
    }
    buckets[index] = new Entry(key, value);    // Update of an existing key omitted
}

Other probing: Quadratic probing, double hashing

Load Factor:

load_factor = n_elements / n_buckets
  • Too high → many collisions → resize
  • Java HashMap resizes at 0.75
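
A hedged sketch of resizing for the chaining variant above; every entry must be rehashed because bucket indices depend on the table length:

void resize() {
    LinkedList<Entry>[] old = buckets;
    buckets = new LinkedList[old.length * 2];  // Double the bucket count
    for (int i = 0; i < buckets.length; i++)
        buckets[i] = new LinkedList<>();
    for (LinkedList<Entry> bucket : old)
        for (Entry e : bucket)
            buckets[hash(e.key)].add(e);       // Rehash with the new length
}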

Complexity:

Operation  Average  Worst (many collisions)
Search     O(1)     O(n)
Insert     O(1)     O(n)
Delete     O(1)     O(n)

Key Points to Look For:
- Understands hash function purpose
- Knows collision handling methods
- Understands load factor and resizing

Follow-up: What makes a good hash function?

6. Heap: min-heap vs max-heap, heapify operation

Explain the difference between min-heap and max-heap. How does the heapify operation work?

Mid

Heap Property:
- Min-Heap: Parent ≤ children (root is minimum)
- Max-Heap: Parent ≥ children (root is maximum)

Structure: Complete binary tree (filled left to right)

Min-Heap:         Max-Heap:
    1                 9
   / \               / \
  3   2             7   8
 / \               / \
7   6             3   5

Array Representation:

Index:     0  1  2  3  4  5
Min-heap: [1, 3, 2, 7, 6, 5]

Parent of i: (i-1)/2
Left child of i: 2*i + 1
Right child of i: 2*i + 2

Heapify (Sift Down):
Restore heap property from a node downward.

void heapifyDown(int[] arr, int n, int i) {
    int smallest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;

    if (left < n && arr[left] < arr[smallest])
        smallest = left;
    if (right < n && arr[right] < arr[smallest])
        smallest = right;

    if (smallest != i) {
        swap(arr, i, smallest);
        heapifyDown(arr, n, smallest);
    }
}

Build Heap - O(n):

void buildHeap(int[] arr) {
    int n = arr.length;
    // Start from last non-leaf node
    for (int i = n/2 - 1; i >= 0; i--) {
        heapifyDown(arr, n, i);
    }
}

Operations:

Operation        Time
Find min/max     O(1)
Insert           O(log n)
Extract min/max  O(log n)
Build heap       O(n)
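
Insert pairs with a sift-up pass, the mirror of heapifyDown above; a min-heap sketch using the same swap helper:

// After appending the new element at index i, sift it up toward the root
void heapifyUp(int[] arr, int i) {
    while (i > 0) {
        int parent = (i - 1) / 2;
        if (arr[i] >= arr[parent]) break;  // Heap property restored
        swap(arr, i, parent);
        i = parent;
    }
}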

Key Points to Look For:
- Understands heap property
- Knows array representation
- Can explain heapify process

Follow-up: Why is building a heap O(n) and not O(n log n)?

7. Graph representations: adjacency list vs matrix

Compare adjacency list and adjacency matrix for graph representation.

Mid

Graph Example:

    A --- B
    |     |
    C --- D

Adjacency Matrix:
2D array where matrix[i][j] = 1 if edge exists.

    A  B  C  D
A [ 0, 1, 1, 0 ]
B [ 1, 0, 0, 1 ]
C [ 1, 0, 0, 1 ]
D [ 0, 1, 1, 0 ]

Adjacency List:
Each vertex stores list of neighbors.

A: [B, C]
B: [A, D]
C: [A, D]
D: [B, C]

Comparison:

Aspect              Matrix  List
Space               O(V²)   O(V + E)
Check edge exists   O(1)    O(degree)
Find all neighbors  O(V)    O(degree)
Add edge            O(1)    O(1)
Add vertex          O(V²)   O(1)
Dense graphs        Better  Worse
Sparse graphs       Worse   Better

When to Use:

Matrix:
- Dense graphs (E ≈ V²)
- Frequent edge existence checks
- Weighted graphs (store weights)
- Small graphs

List:
- Sparse graphs (E << V²)
- Memory constrained
- Iterate over neighbors frequently
- Most real-world graphs

Implementation:

// Adjacency List
Map<Integer, List<Integer>> graph = new HashMap<>();
graph.put(0, Arrays.asList(1, 2));

// Adjacency Matrix
int[][] matrix = new int[V][V];
matrix[0][1] = 1;  // Edge from 0 to 1

Key Points to Look For:
- Knows space complexity difference
- Understands sparse vs dense
- Can implement both

Follow-up: How would you store weighted edges in each representation?

8. Trie: use cases and implementation

What is a Trie? When would you use it and how is it implemented?

Mid

Trie (Prefix Tree):
Tree structure for storing strings where each node represents a character.

Structure:

Words: "cat", "car", "card", "dog"

        root
       /    \
      c      d
      |      |
      a      o
     / \     |
    t*  r*   g*
        |
        d*

* = end of word

Implementation:

class TrieNode {
    TrieNode[] children = new TrieNode[26];
    boolean isEndOfWord;
}

class Trie {
    TrieNode root = new TrieNode();

    void insert(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            int index = c - 'a';
            if (node.children[index] == null)
                node.children[index] = new TrieNode();
            node = node.children[index];
        }
        node.isEndOfWord = true;
    }

    boolean search(String word) {
        TrieNode node = findNode(word);
        return node != null && node.isEndOfWord;
    }

    boolean startsWith(String prefix) {
        return findNode(prefix) != null;
    }

    private TrieNode findNode(String str) {
        TrieNode node = root;
        for (char c : str.toCharArray()) {
            int index = c - 'a';
            if (node.children[index] == null) return null;
            node = node.children[index];
        }
        return node;
    }
}

Complexity:

Operation      Time  Space
Insert         O(m)  O(m)
Search         O(m)  O(1)
Prefix search  O(m)  O(1)

m = length of the word

Use Cases:
1. Autocomplete - Find all words with prefix
2. Spell checker - Suggest corrections
3. IP routing - Longest prefix matching
4. Word games - Scrabble, Boggle
5. Search engines - Query suggestions

Key Points to Look For:
- Understands prefix-based search advantage
- Can implement basic operations
- Knows practical use cases

Follow-up: How would you implement autocomplete returning top 3 suggestions?

9. Red-Black Tree vs AVL Tree

Compare Red-Black Trees and AVL Trees. When would you choose one over the other?

Senior

Both are self-balancing BSTs with O(log n) operations, but with different balance criteria.

AVL Tree:
- Balance Factor: Height difference between subtrees ≤ 1
- Strictly balanced: More rotations during insert/delete
- Height: ~1.44 log(n)

Red-Black Tree:
- Color property: Nodes are red or black
- Relaxed balance: Black height same on all paths
- Height: ~2 log(n)

Red-Black Rules:
1. Every node is red or black
2. Root is black
3. Leaves (NIL) are black
4. Red node's children are black
5. All paths have same black node count

Comparison:

Aspect         AVL                   Red-Black
Balance        Stricter              Looser
Search         Slightly faster       Slightly slower
Insert/Delete  More rotations        Fewer rotations
Height         ~1.44 log n           ~2 log n
Memory         Balance factor (int)  Color bit
Use case       Read-heavy            Write-heavy

When to Use:

AVL Tree:
- Read-heavy workloads
- Faster lookups needed
- Database indexing where reads >> writes

Red-Black Tree:
- Write-heavy workloads
- Balanced read/write
- Used in: TreeMap, TreeSet (Java), std::map (C++)
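
As a quick Java illustration, TreeMap (backed by a Red-Black tree) keeps keys sorted with O(log n) operations:

TreeMap<Integer, String> map = new TreeMap<>();
map.put(5, "five");
map.put(1, "one");
map.put(3, "three");

map.firstKey();   // 1, the smallest key, in O(log n)
map.floorKey(4);  // 3, the greatest key <= 4
map.headMap(5);   // {1=one, 3=three}, a sorted view of keys < 5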

Rotation Comparison:

AVL Insert: At most 2 rotations (one single or one double rebalance)
RB Insert: At most 2 rotations

AVL Delete: Up to O(log n) rotations
RB Delete: At most 3 rotations

Key Points to Look For:
- Understands balance criteria difference
- Knows performance trade-offs
- Can identify appropriate use cases

Follow-up: Why do language standard libraries prefer Red-Black trees?

10. B-Tree and database indexing connection

What is a B-Tree and why is it used for database indexing?

Senior

B-Tree Properties:
- Self-balancing tree optimized for disk access
- Each node can have multiple keys and children
- All leaves at same level
- Minimizes disk I/O operations

Structure (B-Tree of order m):
- Each node has at most m children
- Each node (except root) has at least ⌈m/2⌉ children
- Root has at least 2 children (if not leaf)
- All leaves at same depth

         [30 | 60]
        /    |    \
   [10|20] [40|50] [70|80|90]

Why B-Trees for Databases:

1. Disk I/O Optimization

Disk read = ~10ms
Memory access = ~100ns
Ratio: 100,000x slower

B-Tree minimizes disk reads by:
- Storing many keys per node
- Keeping tree shallow

2. Node Size = Disk Block

Typical disk block: 4KB-16KB
B-Tree node: Fits one block
One I/O = one node read

3. Height Comparison (1 million records):

Binary tree: ~20 levels = 20 disk reads
B-Tree (order 100): ~3 levels = 3 disk reads
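
The heights follow directly from the branching factor; a quick back-of-the-envelope check:

// Height ≈ log_b(n): base 2 for a binary tree, base 100 for a B-Tree of order 100
double binaryHeight = Math.log(1_000_000) / Math.log(2);    // ≈ 19.9 levels
double bTreeHeight  = Math.log(1_000_000) / Math.log(100);  // ≈ 3.0 levels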

B+ Tree (Used in Most DBs):
- All data in leaf nodes
- Internal nodes only have keys
- Leaves linked for range queries

Internal:    [30 | 60]
            /    |    \
Leaves: [10→20→] [30→40→50→] [60→70→80→]
           ↓        ↓          ↓
         Data     Data       Data

Complexity:

Operation    B-Tree
Search       O(log n)
Insert       O(log n)
Delete       O(log n)
Range query  O(log n + k)

Key Points to Look For:
- Understands disk I/O motivation
- Knows difference from binary trees
- Familiar with B+ tree variant

Follow-up: How do database indexes handle updates efficiently?


Algorithms

11. Sorting: Compare QuickSort, MergeSort, HeapSort

Compare QuickSort, MergeSort, and HeapSort in terms of complexity and use cases.

Mid

Complexity Comparison:

Algorithm  Best        Average     Worst       Space     Stable
QuickSort  O(n log n)  O(n log n)  O(n²)       O(log n)  No
MergeSort  O(n log n)  O(n log n)  O(n log n)  O(n)      Yes
HeapSort   O(n log n)  O(n log n)  O(n log n)  O(1)      No

QuickSort:

void quickSort(int[] arr, int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
  • Pro: Fastest in practice (cache-friendly)
  • Con: O(n²) worst case (bad pivot)
  • Use: General purpose, in-place sorting
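
The partition helper is not shown above; a common choice is the Lomuto scheme, sketched here with the last element as pivot:

int partition(int[] arr, int low, int high) {
    int pivot = arr[high];        // Lomuto: last element as pivot
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            swap(arr, i, j);      // Grow the "less than pivot" region
        }
    }
    swap(arr, i + 1, high);       // Place the pivot between the regions
    return i + 1;
}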

MergeSort:

void mergeSort(int[] arr, int l, int r) {
    if (l < r) {
        int m = (l + r) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}
  • Pro: Guaranteed O(n log n), stable
  • Con: O(n) extra space
  • Use: External sorting, linked lists, stability needed

HeapSort:

void heapSort(int[] arr) {
    buildMaxHeap(arr);
    for (int i = arr.length - 1; i > 0; i--) {
        swap(arr, 0, i);       // Move the current max to its final position
        heapify(arr, i, 0);    // Restore the heap property on the shrunken heap
    }
}
  • Pro: O(1) space, guaranteed O(n log n)
  • Con: Not stable, poor cache performance
  • Use: Memory constrained, worst-case guarantee needed

When to Use:

Scenario            Best Choice
General purpose     QuickSort
Stability needed    MergeSort
Memory constrained  HeapSort
Linked lists        MergeSort
Nearly sorted       InsertionSort
Small arrays        InsertionSort

Key Points to Look For:
- Knows all three complexities
- Understands practical trade-offs
- Can explain stability concept

Follow-up: Why is QuickSort faster in practice despite same complexity?

12. When is O(n²) acceptable over O(n log n)?

Are there situations where an O(n²) algorithm might be preferable to an O(n log n) algorithm?

Mid

Yes, Big-O ignores constants and lower-order terms that matter in practice.

When O(n²) Can Be Better:

1. Small Input Size

O(n log n) with constant 100: 100 * n * log(n)
O(n²) with constant 1: n²

For n = 10:
- 100 * 10 * 3.3 = 3300
- 10² = 100

O(n²) is 33x faster!

2. Low Constant Factors
InsertionSort vs MergeSort:

// InsertionSort: Simple operations, great cache locality
for (int i = 1; i < n; i++) {
    int key = arr[i];
    int j = i - 1;
    while (j >= 0 && arr[j] > key) {
        arr[j + 1] = arr[j];
        j--;
    }
    arr[j + 1] = key;
}

3. Nearly Sorted Data
InsertionSort on nearly sorted input: O(n)
QuickSort (with a naive first/last-element pivot) on nearly sorted input: O(n²)

4. Space Constraints

SelectionSort: O(1) space, O(n²) time
MergeSort: O(n) space, O(n log n) time

If memory is precious, accept slower time.

5. Simplicity/Maintenance

// Simple O(n²) - easy to understand and debug
for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
        if (arr[i] > arr[j]) swap(i, j);
    }
}

Hybrid Approaches:
- TimSort: Uses InsertionSort for small runs
- IntroSort: QuickSort + HeapSort fallback
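
A sketch of the cutoff idea these hybrids share; the threshold value and the insertionSort helper are illustrative assumptions:

static final int CUTOFF = 16;  // Illustrative; real libraries tune this empirically

void hybridSort(int[] arr, int low, int high) {
    if (high - low < CUTOFF) {
        insertionSort(arr, low, high);  // Low constant factors win on small ranges
    } else {
        int pi = partition(arr, low, high);
        hybridSort(arr, low, pi - 1);
        hybridSort(arr, pi + 1, high);
    }
}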

Key Points to Look For:
- Understands constants matter
- Knows about hybrid algorithms
- Practical judgment over theoretical

Follow-up: What is TimSort and why does Python/Java use it?

13. Binary Search variations and edge cases

Explain binary search and common variations/edge cases to handle.

Junior

Standard Binary Search:

int binarySearch(int[] arr, int target) {
    int left = 0, right = arr.length - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;  // Avoid overflow
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1;  // Not found
}

Common Variations:

1. Find First Occurrence:

int findFirst(int[] arr, int target) {
    int left = 0, right = arr.length - 1, result = -1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) {
            result = mid;
            right = mid - 1;  // Continue searching left
        } else if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return result;
}

2. Find Last Occurrence:

// Same but: left = mid + 1 when found
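
Spelled out, the mirror of findFirst:

int findLast(int[] arr, int target) {
    int left = 0, right = arr.length - 1, result = -1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) {
            result = mid;
            left = mid + 1;  // Continue searching right
        } else if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return result;
}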

3. Find Insert Position:

int searchInsert(int[] arr, int target) {
    int left = 0, right = arr.length - 1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return left;  // Insert position
}

Edge Cases:
1. Empty array: Check length first
2. Single element: Works correctly
3. Target not present: Return -1 or insert position
4. Duplicates: Use first/last occurrence variant
5. Integer overflow: Use left + (right - left) / 2
6. All same elements: Handle correctly

Key Points to Look For:
- Correct boundary conditions (left <= right)
- Overflow prevention
- Handles edge cases

Follow-up: How do you binary search a rotated sorted array?

14. BFS vs DFS - use cases and implementation

Compare BFS and DFS. When would you use each?

Mid

BFS (Breadth-First Search):
Explores level by level using a queue.

void bfs(Node start) {
    Queue<Node> queue = new LinkedList<>();
    Set<Node> visited = new HashSet<>();

    queue.offer(start);
    visited.add(start);

    while (!queue.isEmpty()) {
        Node node = queue.poll();
        process(node);

        for (Node neighbor : node.neighbors) {
            if (!visited.contains(neighbor)) {
                visited.add(neighbor);
                queue.offer(neighbor);
            }
        }
    }
}

DFS (Depth-First Search):
Explores as deep as possible using stack/recursion.

void dfs(Node node, Set<Node> visited) {
    if (visited.contains(node)) return;
    visited.add(node);
    process(node);

    for (Node neighbor : node.neighbors) {
        dfs(neighbor, visited);
    }
}

Comparison:

Aspect          BFS               DFS
Data structure  Queue             Stack/Recursion
Explores        Level by level    Branch by branch
Space           O(width)          O(depth)
Shortest path   Yes (unweighted)  No
Completeness    Yes               Yes (finite graphs)

When to Use:

BFS:
- Shortest path in unweighted graphs
- Level-order traversal
- Finding nearest (closest match)
- Social networks (degrees of separation)
- Web crawlers (limited depth)

DFS:
- Cycle detection
- Topological sorting
- Path finding (any path, not shortest)
- Maze solving
- Tree traversals (preorder, inorder, postorder)
- Connected components

Example Problem Selection:

"Find shortest path": BFS
"Check if path exists": Either (DFS simpler)
"Find all paths": DFS with backtracking
"Level-order print": BFS
"Detect cycle": DFS

Key Points to Look For:
- Correct implementations
- Understands space complexity difference
- Can choose appropriate algorithm

Follow-up: How do you detect a cycle using DFS?

15. Dijkstra's vs Bellman-Ford for shortest path

When would you use Dijkstra's algorithm vs Bellman-Ford?

Senior

Both find shortest paths, but with different constraints and performance.

Dijkstra's Algorithm:

void dijkstra(Graph g, int src) {
    int[] dist = new int[V];
    Arrays.fill(dist, Integer.MAX_VALUE);
    dist[src] = 0;

    PriorityQueue<int[]> pq = new PriorityQueue<>((a,b) -> a[1] - b[1]);
    pq.offer(new int[]{src, 0});

    while (!pq.isEmpty()) {
        int[] curr = pq.poll();
        int u = curr[0];
        if (curr[1] > dist[u]) continue;  // Skip stale entries; a shorter path was already found

        for (int[] edge : g.adj[u]) {
            int v = edge[0], weight = edge[1];
            if (dist[u] + weight < dist[v]) {
                dist[v] = dist[u] + weight;
                pq.offer(new int[]{v, dist[v]});
            }
        }
    }
}
  • Time: O((V + E) log V) with binary heap
  • Limitation: Non-negative weights only

Bellman-Ford Algorithm:

void bellmanFord(Graph g, int src) {
    int[] dist = new int[V];
    Arrays.fill(dist, Integer.MAX_VALUE);
    dist[src] = 0;

    // Relax all edges V-1 times
    for (int i = 0; i < V - 1; i++) {
        for (Edge e : edges) {
            if (dist[e.src] + e.weight < dist[e.dst]) {
                dist[e.dst] = dist[e.src] + e.weight;
            }
        }
    }

    // Check for negative cycles
    for (Edge e : edges) {
        if (dist[e.src] + e.weight < dist[e.dst]) {
            System.out.println("Negative cycle detected!");
        }
    }
}
  • Time: O(V × E)
  • Handles: Negative weights

Comparison:

Aspect            Dijkstra             Bellman-Ford
Time              O((V+E) log V)       O(V × E)
Negative weights  No                   Yes
Negative cycles   Can't detect         Can detect
Space             O(V)                 O(V)
Best for          Dense, non-negative  Negative weights

When to Use:

Dijkstra:
- Non-negative edge weights
- Need fastest algorithm
- GPS navigation, network routing

Bellman-Ford:
- Negative edge weights possible
- Need to detect negative cycles
- Currency arbitrage detection
- Distributed systems (simpler to implement)

Key Points to Look For:
- Knows negative weight handling difference
- Understands complexity trade-off
- Can identify negative cycle use case

Follow-up: What algorithm would you use for all-pairs shortest path?

16. Dynamic Programming: top-down vs bottom-up

Explain the difference between top-down and bottom-up dynamic programming.

Mid

Dynamic Programming: Solve complex problems by breaking into overlapping subproblems.

Example: Fibonacci

Top-Down (Memoization):
Start with original problem, recurse down, cache results.

int[] memo = new int[n + 1];

int fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];

    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

Bottom-Up (Tabulation):
Start with smallest subproblems, build up to answer.

int fib(int n) {
    if (n <= 1) return n;

    int[] dp = new int[n + 1];
    dp[0] = 0;
    dp[1] = 1;

    for (int i = 2; i <= n; i++) {
        dp[i] = dp[i-1] + dp[i-2];
    }
    return dp[n];
}

Space-Optimized Bottom-Up:

int fib(int n) {
    if (n <= 1) return n;
    int prev2 = 0, prev1 = 1;

    for (int i = 2; i <= n; i++) {
        int curr = prev1 + prev2;
        prev2 = prev1;
        prev1 = curr;
    }
    return prev1;
}

Comparison:

Aspect              Top-Down              Bottom-Up
Approach            Recursive + memo      Iterative + table
Subproblems solved  Only needed ones      All subproblems
Stack space         O(n) recursion        O(1)
Code style          Often more intuitive  Often more efficient
Space optimization  Harder                Easier

When to Use:

Top-Down:
- Problem naturally recursive
- Not all subproblems needed
- Easier to think about

Bottom-Up:
- Need space optimization
- Iterative preferred
- Clear subproblem order

Key Points to Look For:
- Can implement both approaches
- Understands trade-offs
- Knows when each is preferred

Follow-up: How do you identify overlapping subproblems?

17. Greedy algorithms: when do they work?

When can you use a greedy algorithm? How do you know if greedy will give optimal solution?

Mid

Greedy Algorithm: Make locally optimal choice at each step, hoping for global optimum.

When Greedy Works:
1. Greedy Choice Property: Local optimal leads to global optimal
2. Optimal Substructure: Optimal solution contains optimal solutions to subproblems

Classic Greedy Problems:

1. Activity Selection ✓ Greedy works

// Select max non-overlapping activities
// Greedy: Always pick earliest ending activity
Arrays.sort(activities, (a, b) -> a.end - b.end);
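
Filled in, the selection loop might look like this; an Activity class with int start and end fields is assumed:

int selectActivities(Activity[] activities) {
    Arrays.sort(activities, (a, b) -> a.end - b.end);  // Earliest finish first
    int count = 0, lastEnd = Integer.MIN_VALUE;
    for (Activity act : activities) {
        if (act.start >= lastEnd) {  // Compatible with the last selected activity
            count++;
            lastEnd = act.end;
        }
    }
    return count;
}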

2. Fractional Knapsack ✓ Greedy works

// Take items by value/weight ratio (assumed to be a double field)
Arrays.sort(items, (a, b) -> Double.compare(b.ratio, a.ratio));

3. Huffman Coding ✓ Greedy works

4. Dijkstra's Shortest Path ✓ Greedy works (non-negative weights)

When Greedy Fails:

0/1 Knapsack ✗ Greedy fails

Items: [(60, 10), (100, 20), (120, 30)]
Capacity: 50

Greedy (by value/weight): Takes item 1 (ratio 6), then item 2 (ratio 5);
the remaining capacity of 20 cannot fit item 3.
Total: 160

Optimal: Items 2 and 3 (weight exactly 50)
Total: 220

Coin Change (general) ✗ Greedy can fail

Coins: [1, 3, 4], Amount: 6
Greedy: 4 + 1 + 1 = 3 coins
Optimal: 3 + 3 = 2 coins

How to Verify:
1. Exchange argument: Show swapping greedy choice doesn't improve
2. Proof by contradiction: Assume optimal differs, show contradiction
3. Counterexample: Find case where greedy fails

Key Points to Look For:
- Understands greedy choice property
- Can identify when greedy fails
- Knows classic greedy problems

Follow-up: How would you solve 0/1 Knapsack optimally?

18. Recursion vs iteration - stack implications

What are the trade-offs between recursion and iteration? What are the stack implications?

Junior

Recursion:
Function calls itself, uses call stack.

int factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);
}

Iteration:
Uses loops, explicit state management.

int factorial(int n) {
    int result = 1;
    for (int i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}

Stack Implications:

Recursion:

factorial(5)
    factorial(4)
        factorial(3)
            factorial(2)
                factorial(1)

Call stack depth = n
Each frame stores: return address, local variables, parameters

Stack Overflow Risk:

// This will cause StackOverflowError
factorial(100000);  // Too deep!

// Default stack size: ~1MB
// Each frame: ~100-200 bytes
// Max depth: ~5000-10000 calls

Comparison:

Aspect          Recursion                Iteration
Memory          O(n) stack               O(1)
Readability     Often clearer            Can be complex
Performance     Function call overhead   Faster
Stack overflow  Yes, for deep recursion  No
Debugging       Harder (stack traces)    Easier

Tail Recursion:
Special case where recursive call is last operation.

// Tail recursive
int factorial(int n, int acc) {
    if (n <= 1) return acc;
    return factorial(n - 1, n * acc);  // Last operation
}

Some languages optimize this to iteration (Java doesn't).

When to Use:

Recursion:
- Tree/graph traversals
- Divide and conquer
- Problem naturally recursive
- Depth is bounded

Iteration:
- Performance critical
- Simple loops
- Deep recursion needed

Key Points to Look For:
- Understands stack frame concept
- Knows overflow risk
- Can convert between approaches

Follow-up: How would you convert a recursive DFS to iterative?

19. Two-pointer technique applications

Explain the two-pointer technique and common applications.

Mid

Two-Pointer Technique:
Use two pointers to traverse data structure, reducing time complexity from O(n²) to O(n).

Pattern 1: Opposite Ends
Start from both ends, move toward middle.

// Two Sum on sorted array
int[] twoSum(int[] arr, int target) {
    int left = 0, right = arr.length - 1;
    while (left < right) {
        int sum = arr[left] + arr[right];
        if (sum == target) return new int[]{left, right};
        if (sum < target) left++;
        else right--;
    }
    return new int[]{-1, -1};
}

Pattern 2: Same Direction (Fast/Slow)
Both pointers start at same end.

// Remove duplicates in-place
int removeDuplicates(int[] arr) {
    int slow = 0;
    for (int fast = 1; fast < arr.length; fast++) {
        if (arr[fast] != arr[slow]) {
            slow++;
            arr[slow] = arr[fast];
        }
    }
    return slow + 1;
}

Pattern 3: Cycle Detection (Floyd's)

boolean hasCycle(ListNode head) {
    ListNode slow = head, fast = head;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
        if (slow == fast) return true;
    }
    return false;
}

Common Applications:
1. Two Sum (sorted array)
2. Three Sum (fix one, two-pointer on rest)
3. Container with Most Water
4. Trapping Rain Water
5. Palindrome check
6. Remove duplicates
7. Merge sorted arrays
8. Linked list cycle detection
9. Find middle of linked list (see the sketch below)
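
Item 9 as a quick sketch of the fast/slow pattern:

// When fast reaches the end, slow is at the middle
ListNode middle(ListNode head) {
    ListNode slow = head, fast = head;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
    }
    return slow;
}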

Key Insight:
When sorted array or comparing pairs, consider two pointers before hash map.

Key Points to Look For:
- Knows multiple patterns
- Can identify when applicable
- Understands complexity improvement

Follow-up: How would you solve Three Sum using two pointers?

20. Sliding window pattern explained

Explain the sliding window technique. When and how do you use it?

Mid

Sliding Window:
Maintain a "window" over data, slide it to examine all subarrays/substrings of certain size or property.

Fixed Size Window:

// Max sum of subarray of size k
int maxSum(int[] arr, int k) {
    int windowSum = 0, maxSum = 0;

    // Initial window
    for (int i = 0; i < k; i++) {
        windowSum += arr[i];
    }
    maxSum = windowSum;

    // Slide window
    for (int i = k; i < arr.length; i++) {
        windowSum += arr[i] - arr[i - k];  // Add new, remove old
        maxSum = Math.max(maxSum, windowSum);
    }
    return maxSum;
}

Variable Size Window:

// Smallest subarray with sum >= target
int minSubArrayLen(int target, int[] arr) {
    int left = 0, sum = 0, minLen = Integer.MAX_VALUE;

    for (int right = 0; right < arr.length; right++) {
        sum += arr[right];  // Expand window

        while (sum >= target) {  // Contract window
            minLen = Math.min(minLen, right - left + 1);
            sum -= arr[left];
            left++;
        }
    }
    return minLen == Integer.MAX_VALUE ? 0 : minLen;
}

String Pattern:

// Longest substring without repeating characters
int lengthOfLongestSubstring(String s) {
    Set<Character> window = new HashSet<>();
    int left = 0, maxLen = 0;

    for (int right = 0; right < s.length(); right++) {
        while (window.contains(s.charAt(right))) {
            window.remove(s.charAt(left));
            left++;
        }
        window.add(s.charAt(right));
        maxLen = Math.max(maxLen, right - left + 1);
    }
    return maxLen;
}

When to Use:
- Contiguous subarray/substring problems
- "Find min/max subarray of size k"
- "Longest substring with condition"
- "Smallest subarray with sum >= X"

Template:

int left = 0;
for (int right = 0; right < n; right++) {
    // Expand: add arr[right] to window

    while (/* window is invalid */) {
        // Contract: remove arr[left] from window
        left++;
    }

    // Update answer
}

Key Points to Look For:
- Distinguishes fixed vs variable window
- Knows when technique applies
- Can implement both patterns

Follow-up: How would you find all anagrams in a string?


Complexity Analysis

21. Big O, Big Omega, Big Theta - differences

What is the difference between Big O, Big Omega, and Big Theta notation?

Junior

Big O (O): Upper bound - worst case
"At most" / "grows no faster than"

f(n) = O(g(n)) means f(n) ≤ c·g(n) for large n

Example: Linear search is O(n)
- Never worse than n comparisons

Big Omega (Ω): Lower bound - best case
"At least" / "grows no slower than"

f(n) = Ω(g(n)) means f(n) ≥ c·g(n) for large n

Example: Comparison sort is Ω(n log n)
- Can't do better than n log n comparisons

Big Theta (Θ): Tight bound
"Exactly" / "grows at the same rate" (a tight bound, not the average case)

f(n) = Θ(g(n)) means c₁·g(n) ≤ f(n) ≤ c₂·g(n)

Example: MergeSort is Θ(n log n)
- Always n log n (best, average, worst)

Visual:

        Big O (upper bound)
       /
      f(n)
       \
        Big Omega (lower bound)

If both meet: Big Theta (tight bound)

Examples:

Algorithm      O (worst)   Ω (best)    Θ (tight)
Linear search  O(n)        Ω(1)        -
Binary search  O(log n)    Ω(1)        -
MergeSort      O(n log n)  Ω(n log n)  Θ(n log n)
QuickSort      O(n²)       Ω(n log n)  -

Common Misconception:
"Big O = worst case" is oversimplified.
Big O can describe any bound (best, average, worst).
We usually use Big O for worst case by convention.

Key Points to Look For:
- Knows all three notations
- Understands upper/lower/tight bound
- Can give examples

Follow-up: Why do we typically focus on Big O in interviews?

22. How to analyze nested loops?

How do you analyze the time complexity of nested loops?

Junior

Rule: Multiply the complexities of each nested loop.

Basic Nested Loops:

for (int i = 0; i < n; i++) {       // n times
    for (int j = 0; j < n; j++) {   // n times each
        // O(1) work
    }
}
// Total: O(n × n) = O(n²)

Dependent Inner Loop:

for (int i = 0; i < n; i++) {
    for (int j = 0; j < i; j++) {  // Depends on i
        // O(1) work
    }
}
// 0 + 1 + 2 + ... + (n-1) = n(n-1)/2 = O(n²)

Different Bounds:

for (int i = 0; i < n; i++) {       // n times
    for (int j = 0; j < m; j++) {   // m times
        // O(1) work
    }
}
// Total: O(n × m)

Logarithmic Inner Loop:

for (int i = 0; i < n; i++) {       // n times
    for (int j = 1; j < n; j *= 2) { // log n times
        // O(1) work
    }
}
// Total: O(n log n)

Decreasing Inner Loop:

for (int i = n; i > 0; i /= 2) {    // log n times
    for (int j = 0; j < i; j++) {   // i times
        // O(1) work
    }
}
// n + n/2 + n/4 + ... + 1 ≈ 2n = O(n)

Common Patterns:

Pattern              Complexity
n × n                O(n²)
n × log n            O(n log n)
n × m                O(nm)
0+1+...+(n-1)        O(n²)
n + n/2 + n/4 + ...  O(n)

Key Points to Look For:
- Correctly multiplies independent loops
- Handles dependent loops (sum series)
- Recognizes common patterns

Follow-up: What's the complexity of triple nested loops iterating to n?

23. Space complexity of recursive algorithms

How do you analyze the space complexity of recursive algorithms?

Mid

Space Complexity Components:
1. Call Stack: O(max recursion depth)
2. Auxiliary Space: Additional data structures

Simple Recursion:

int factorial(int n) {
    if (n <= 1) return 1;
    return n * factorial(n - 1);
}
// Space: O(n) - n stack frames

Tail Recursion:

int factorial(int n, int acc) {
    if (n <= 1) return acc;
    return factorial(n - 1, n * acc);
}
// Space: O(n) in Java (no TCO)
// Space: O(1) in languages with tail call optimization

Binary Recursion:

int fib(int n) {
    if (n <= 1) return n;
    return fib(n-1) + fib(n-2);
}
// Space: O(n) - max depth is n (not 2^n!)

Tree Traversal:

void inorder(Node root) {
    if (root == null) return;
    inorder(root.left);
    visit(root);
    inorder(root.right);
}
// Space: O(h) where h = tree height
// Balanced: O(log n)
// Skewed: O(n)

Merge Sort:

void mergeSort(int[] arr, int l, int r) {
    if (l < r) {
        int m = (l + r) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);  // Uses O(n) temp array
    }
}
// Stack: O(log n)
// Merge temp array: O(n)
// Total: O(n)

Key Insight:

Space = max(stack depth × frame size, auxiliary space)

Common Examples:

Algorithm      Stack Depth  Aux Space  Total
Factorial      O(n)         O(1)       O(n)
Fibonacci      O(n)         O(1)       O(n)
Binary search  O(log n)     O(1)       O(log n)
Merge sort     O(log n)     O(n)       O(n)
Quick sort     O(log n)*    O(1)       O(log n)
*With good pivot selection

Key Points to Look For:
- Understands stack depth matters
- Can analyze tree-shaped recursion
- Considers auxiliary space separately

Follow-up: How does stack depth of DFS differ for trees vs graphs?

24. Amortized analysis explained

What is amortized analysis? Give an example.

Senior

Amortized Analysis:
Average time per operation over a sequence of operations, accounting for expensive operations being rare.

Not the same as average case!
- Average case: probability of different inputs
- Amortized: sequence of operations on same structure

Example: Dynamic Array (ArrayList)

class DynamicArray {
    int[] arr = new int[1];  // Start with a small initial capacity
    int size = 0;

    void add(int val) {
        if (size == arr.length) {
            // Resize: O(n)
            int[] newArr = new int[arr.length * 2];
            System.arraycopy(arr, 0, newArr, 0, size);
            arr = newArr;
        }
        arr[size++] = val;  // O(1)
    }
}

Analysis:
- Most insertions: O(1)
- Occasional resize: O(n)
- Pattern: 1, 1, 1, ..., 1, n, 1, 1, ..., 1, n, ...

Aggregate Method:

Insert n elements:
- n insertions = n operations
- Resizes at sizes 1, 2, 4, 8, ..., n
- Resize cost: 1 + 2 + 4 + 8 + ... + n = 2n - 1

Total: n + 2n = 3n
Amortized: 3n / n = O(1) per operation

Accounting Method:
Charge $3 per insertion:
- $1 for the insertion
- $2 saved for future resize (moving self + one earlier element)

Every resize is "pre-paid" by earlier insertions.

Potential Method:
Define a potential function Φ = 2×size - capacity.
Amortized cost = actual cost + ΔΦ; with this Φ, each insertion's amortized cost is constant.

Other Examples:
- Hash table resizing: O(1) amortized insert
- Splay tree: O(log n) amortized operations
- Union-Find with path compression: O(α(n)) amortized

Key Points to Look For:
- Distinguishes from average case
- Can explain with dynamic array
- Knows amortized ≠ worst case

Follow-up: What's the amortized complexity of push/pop for a stack?

25. Best/worst/average case scenarios

Explain best case, worst case, and average case complexity with examples.

Junior

Best Case: Minimum operations on most favorable input
Worst Case: Maximum operations on least favorable input
Average Case: Expected operations over all possible inputs

Example: Linear Search

int linearSearch(int[] arr, int target) {
    for (int i = 0; i < arr.length; i++) {
        if (arr[i] == target) return i;
    }
    return -1;
}

Case     Scenario                      Complexity
Best     Target at index 0             O(1)
Worst    Target not present or at end  O(n)
Average  Target at a random position   O(n/2) = O(n)

Example: QuickSort

void quickSort(int[] arr, int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

Case     Scenario                             Complexity
Best     Pivot always median                  O(n log n)
Worst    Pivot always min/max (sorted input)  O(n²)
Average  Random pivot position                O(n log n)

Example: Binary Search Tree

Case     Tree Shape            Complexity
Best     Balanced tree         O(log n)
Worst    Skewed (linked list)  O(n)
Average  Random insertions     O(log n)

Why Focus on Worst Case:
1. Guarantees: Know the upper bound
2. Security: Attackers may craft worst-case input
3. Real-time systems: Must meet deadlines

When Average Matters:
1. Random inputs guaranteed
2. Amortized analysis (many operations)
3. Comparison between algorithms with same worst case

Key Points to Look For:
- Concrete examples for each case
- Knows when to consider which
- Understands practical implications

Follow-up: Why does QuickSort perform well despite O(n²) worst case?