
Professional Practices

Agile, code review, CI/CD, and collaboration

1.

Agile principles vs practices

What's the difference between Agile principles and practices?

Junior

Agile Principles (from Manifesto):
Core values that guide decision-making.

1. Individuals and interactions over processes and tools
2. Working software over comprehensive documentation
3. Customer collaboration over contract negotiation
4. Responding to change over following a plan

12 Principles:
1. Satisfy customer through early and continuous delivery
2. Welcome changing requirements
3. Deliver working software frequently
4. Business and developers work together daily
5. Build projects around motivated individuals
6. Face-to-face conversation is best
7. Working software is primary measure of progress
8. Sustainable development pace
9. Continuous attention to technical excellence
10. Simplicity—maximize work not done
11. Self-organizing teams
12. Regular reflection and adjustment

Agile Practices:
Specific implementations of principles.

| Principle            | Practice                          |
|----------------------|-----------------------------------|
| Deliver frequently   | 2-week sprints, CI/CD             |
| Welcome change       | Backlog grooming, sprint planning |
| Working software     | Definition of Done, demos         |
| Collaboration        | Daily standups, pair programming  |
| Technical excellence | TDD, code review, refactoring     |
| Reflection           | Retrospectives                    |

Principles vs Practices:

Principle: "Respond to change over following a plan"
Practice: User stories can be reprioritized each sprint

Principle: "Working software is primary measure"
Practice: Sprint demos of completed features

Principle: "Continuous attention to technical excellence"
Practice: Code review, automated testing, refactoring

Common Mistake:
Following practices without understanding principles.

"We do standups because Scrum says so"
vs
"We do standups because frequent communication
helps us adapt to changes quickly"

Key Points to Look For:
- Knows principles guide practices
- Can connect practice to principle
- Values principles over rituals

Follow-up: What happens when teams follow practices without understanding principles?

2.

Scrum ceremonies and roles

What are the Scrum ceremonies and roles?

Junior

Scrum Roles:

1. Product Owner:
- Owns product backlog
- Prioritizes features
- Represents stakeholders
- Accepts/rejects work

2. Scrum Master:
- Facilitates ceremonies
- Removes impediments
- Coaches team on Scrum
- Shields team from distractions

3. Development Team:
- Cross-functional (dev, test, design)
- Self-organizing
- Delivers increment each sprint
- Typically 3-9 people

Scrum Ceremonies:

1. Sprint Planning:

When: Start of sprint
Duration: 2-4 hours (2-week sprint)
Who: Whole team
Output: Sprint goal, sprint backlog

"What can we commit to this sprint?"
"How will we accomplish the work?"

2. Daily Standup:

When: Daily, same time
Duration: 15 minutes max
Who: Development team (others observe)

Each person answers:
- What did I do yesterday?
- What will I do today?
- Any blockers?

3. Sprint Review (Demo):

When: End of sprint
Duration: 1-2 hours
Who: Team + stakeholders

- Demo completed work
- Gather feedback
- Update backlog based on feedback

4. Sprint Retrospective:

When: End of sprint (after review)
Duration: 1-2 hours
Who: Scrum team only

- What went well?
- What could improve?
- What will we commit to improve?

5. Backlog Refinement (Grooming):

When: During sprint (not a formal ceremony)
Duration: ~10% of sprint
Who: PO + team

- Clarify upcoming stories
- Break down large items
- Estimate stories

Artifacts:
- Product Backlog: Prioritized list of all work
- Sprint Backlog: Work committed for sprint
- Increment: Working product at sprint end

Key Points to Look For:
- Knows all ceremonies
- Understands each role
- Can explain purpose of each

Follow-up: What's the most important Scrum ceremony?

3.

Sprint planning and estimation

How do you approach sprint planning and estimation?

Junior

Sprint Planning Process:

1. Capacity Planning:

Team of 5 × 10 working days = 50 person-days
Minus meetings, support rotation, holidays ≈ 35 focused days
Cross-check against recent velocity (e.g., last 3 sprints averaged ~30 points)

2. Story Selection:

- Pull stories from the top of the prioritized backlog
- Confirm each is ready (clear acceptance criteria, estimated)
- Stop when the sprint reaches capacity/velocity
- Agree on a sprint goal that ties the stories together

3. Task Breakdown:

"Add password reset" →
- Create reset endpoint
- Add email template
- Implement token expiry
- Write tests
Each task should be finishable in a day or less

Estimation Techniques:

Story Points:

Relative measure of effort and complexity, not hours.
Typical scale: 1, 2, 3, 5, 8, 13 (Fibonacci-like).
"This is about twice the effort of that 2-pointer, so call it 5."

Planning Poker:

1. PO reads the story
2. Everyone picks an estimate privately
3. Reveal simultaneously
4. Highest and lowest explain their reasoning
5. Re-vote until rough consensus

T-Shirt Sizing:

XS / S / M / L / XL for rough, early estimates.
Useful for roadmap planning before stories are refined.

Velocity:

Average story points completed per sprint.
Last 3 sprints: 28, 32, 30 → plan for ~30 points next sprint (see the sketch below).

Best Practices:

- Estimate as a team, not individually
- Size relative to well-understood reference stories
- Re-estimate only when scope actually changes
- Track velocity over several sprints, not one
- Don't convert points to hours or compare across teams
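
A rough sketch of how that velocity number feeds sprint planning (the story names and numbers are purely illustrative):

```python
# Illustrative only: plan a sprint against average velocity.
recent_velocities = [28, 32, 30]                              # points finished in the last 3 sprints
capacity = sum(recent_velocities) / len(recent_velocities)    # ~30 points

# Candidate stories from the top of the prioritized backlog: (name, points)
candidate_stories = [
    ("Password reset", 5),
    ("Export to CSV", 8),
    ("Profile page redesign", 13),
    ("Fix email typo", 1),
    ("Audit logging", 8),
]

sprint_backlog, committed = [], 0
for name, points in candidate_stories:
    if committed + points <= capacity:
        sprint_backlog.append(name)
        committed += points

print(f"Capacity ~{capacity:.0f} pts, committing {committed} pts: {sprint_backlog}")
```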

Key Points to Look For:
- Uses relative estimation
- Understands velocity
- Involves whole team

Follow-up: How do you handle stories that turn out larger than estimated?

4.

Kanban vs Scrum

When would you use Kanban vs Scrum?

Mid

Scrum:
Time-boxed iterations (sprints).

Sprint 1 (2 weeks)  Sprint 2 (2 weeks)
[Planning|Work|Demo] [Planning|Work|Demo]

- Fixed sprint length
- Committed sprint backlog
- Roles: PO, SM, Team
- Ceremonies: Planning, standup, review, retro

Kanban:
Continuous flow, no sprints.

Backlog → In Progress → Review → Done
  [●●●]      [●●]        [●]     [●●●●]

- Continuous delivery
- WIP limits
- Pull system
- No prescribed roles

Comparison:

| Aspect     | Scrum                | Kanban                 |
|------------|----------------------|------------------------|
| Cadence    | Fixed sprints        | Continuous             |
| Commitment | Sprint backlog       | None                   |
| Roles      | Prescribed           | Flexible               |
| Change     | Wait for next sprint | Anytime                |
| Planning   | Sprint planning      | On-demand              |
| Metrics    | Velocity             | Cycle time, throughput |
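
A quick sketch of how the Kanban metrics above are typically computed (the dates and items are illustrative):

```python
from datetime import date

# Illustrative completed work items: (started, finished)
items = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 2), date(2024, 3, 8)),
    (date(2024, 3, 5), date(2024, 3, 7)),
]

cycle_times = [(done - start).days for start, done in items]
avg_cycle_time = sum(cycle_times) / len(cycle_times)   # days from start to done

weeks_observed = 1
throughput = len(items) / weeks_observed                # items finished per week

print(f"Avg cycle time: {avg_cycle_time:.1f} days, throughput: {throughput:.0f}/week")
```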

Use Scrum When:
- New product development
- Need regular release rhythm
- Team benefits from structure
- Stakeholders want predictable delivery
- Clear sprint goals possible

Use Kanban When:
- Support/maintenance work
- Unpredictable incoming work
- Need to release continuously
- Mature team, less structure needed
- Operations/DevOps work

Hybrid (Scrumban):

Scrum ceremonies + Kanban flow
- Sprint planning but continuous delivery
- WIP limits within sprint
- Common in teams transitioning

Key Points to Look For:
- Knows key differences
- Can recommend based on context
- Understands trade-offs

Follow-up: How would you transition a team from Scrum to Kanban?

5.

Technical debt: managing and communicating

How do you manage and communicate technical debt?

Mid

What is Technical Debt:
Shortcuts taken that need future work to fix.

Types:

Deliberate: "Ship now, refactor later"
Accidental: "Didn't know better approach"
Bit rot: "Code aged, needs updating"

Examples:

- No tests (risky changes)
- Outdated dependencies (security risk)
- Duplicated code (maintenance burden)
- Poor abstractions (hard to extend)
- Missing documentation (onboarding cost)

Tracking:

// Code comments
// TODO: Refactor this to use new API
// TECH-DEBT: Performance issue with N+1 queries

// Dedicated backlog
Jira: Label "tech-debt"
Priority: Impact × Likelihood of needing to change
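
One way to turn that prioritization heuristic into an ordering (the scales and item names are illustrative):

```python
# Illustrative 1-5 scoring of tech-debt items by impact x likelihood of change.
debt_items = [
    {"name": "No tests around billing",     "impact": 5, "likelihood": 4},
    {"name": "Outdated logging library",    "impact": 2, "likelihood": 2},
    {"name": "Duplicated validation logic", "impact": 3, "likelihood": 4},
]

for item in debt_items:
    item["score"] = item["impact"] * item["likelihood"]

# Highest score = pay down first
for item in sorted(debt_items, key=lambda i: i["score"], reverse=True):
    print(f'{item["score"]:>2}  {item["name"]}')
```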

Managing:

1. Dedicate Capacity:

Option A: 20% of each sprint for tech debt
Option B: One "cleanup" sprint per quarter
Option C: Boy Scout rule (leave code cleaner)

2. Prioritize by Impact:

High: Blocking new features, security risk
Medium: Slowing development
Low: Minor inconvenience

3. Pay Down Incrementally:

Don't: "Let's stop features for 2 months to refactor"
Do: "Each sprint, improve one area while adding features"

Communicating to Stakeholders:

1. Use Business Language:

Not: "We have technical debt in the authentication module"
But: "Login takes 5 seconds due to outdated code.
      Fixing it will reduce to 1 second, improving user retention."

2. Show Impact:

"Adding new payment method takes 2 weeks now.
 After cleanup: 2 days.
 Investment: 1 sprint.
 ROI: Faster delivery of requested features."

3. Visualize:

Graph: Development velocity declining over time
       Tech debt reducing feature delivery speed

Key Points to Look For:
- Tracks debt systematically
- Prioritizes by impact
- Communicates in business terms

Follow-up: How do you prevent technical debt from accumulating?

6.

Definition of Done vs Acceptance Criteria

What's the difference between Definition of Done and Acceptance Criteria?

Junior

Definition of Done (DoD):
Universal checklist for ALL stories. Team-wide standard.

// "Train wreck" - reaching through objects
customer.getWallet().getMoney().getAmount();

// Problem: Customer knows about Wallet, Money internals
// If Wallet changes, this breaks

Acceptance Criteria (AC):
Specific conditions for ONE story. Story-specific.

Example AC for a "Password reset" story:
- A registered user receives a reset email within 1 minute
- The reset link expires after 24 hours
- An expired link shows a clear error with a "resend" option

Comparison:

| Aspect  | Definition of Done | Acceptance Criteria |
|---------|--------------------|---------------------|
| Scope   | All stories        | One story           |
| Owner   | Team               | Product Owner       |
| Changes | Rarely             | Per story           |
| Focus   | Quality/process    | Functionality       |

Together:

A story is only "done" when its Acceptance Criteria are met
AND every item on the Definition of Done is satisfied.
AC define WHAT to build for this story; the DoD defines the quality bar for all stories.

Why Both Matter:

- AC alone: the feature "works" but may ship unreviewed, untested code
- DoD alone: a consistent quality bar but no agreement on what the feature should do

Key Points to Look For:
- Distinguishes scope (all vs one)
- Knows both are needed
- Can give examples

Follow-up: Who defines the Definition of Done?


Engineering Practices

7.

Code review: what to look for

What do you look for when reviewing code?

Junior

Code Review Checklist:

1. Functionality:

□ Does it do what the story/ticket asks?
□ Edge cases handled?
□ Error handling appropriate?

2. Design:

□ Follows existing patterns in codebase
□ Appropriate abstractions
□ Single responsibility
□ Not over-engineered

3. Readability:

□ Clear naming (variables, functions, classes)
□ Self-documenting code
□ Comments where needed (why, not what)
□ Reasonable function/file length

4. Testing:

□ Tests present and meaningful
□ Edge cases tested
□ Tests readable
□ Good test names

5. Security:

□ Input validation
□ No SQL injection
□ No hardcoded secrets
□ Proper authorization checks

6. Performance:

□ No obvious N+1 queries
□ Appropriate data structures
□ No unnecessary loops
□ Reasonable complexity

Review Approach:

1. Understand the context (read ticket/PR description)
2. Run the code if possible
3. Review changes file by file
4. Focus on logic, not style (use linters)
5. Be constructive, not critical

Good Feedback:

// Bad
"This is wrong"

// Good
"Consider using a Map here instead of array search -
it would reduce lookup from O(n) to O(1)"

// Bad
"Why did you do it this way?"

// Good
"I see you used approach X. I've seen approach Y work well
for similar cases because of Z. What do you think?"
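
For example, the Map suggestion in that comment might look like this (the data and names are illustrative):

```python
users = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Linus"}]

# Before: linear search, O(n) per lookup
def find_user_linear(user_id):
    return next((u for u in users if u["id"] == user_id), None)

# After: build an index once, O(1) per lookup
users_by_id = {u["id"]: u for u in users}

def find_user(user_id):
    return users_by_id.get(user_id)
```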

What NOT to Focus On:

- Style issues (automated by linters)
- Personal preferences
- Perfect vs good enough
- Rewriting the whole thing

Key Points to Look For:
- Systematic approach
- Constructive feedback
- Balances thoroughness with efficiency

Follow-up: How do you handle disagreements in code review?

8.

PR best practices: size, description, commits

What makes a good pull request?

Junior

Good PR Characteristics:

1. Right Size:

Ideal: 200-400 lines changed
Too big: Hard to review, mistakes slip through
Too small: Too many PRs to manage

If large, split into:
- Refactoring PR (no behavior change)
- Feature PR (uses refactored code)

2. Clear Description:

## Summary
Add password reset functionality for users.

## Changes
- Created `/reset-password` endpoint
- Added email sending service
- Created password reset email template

## Testing
- Unit tests for ResetService
- Integration test for endpoint
- Manual testing on staging

## Related
- Closes #123
- Depends on #120

## Screenshots (if UI change)
[Before/After images]

3. Clean Commit History:

# Good
feat: add password reset endpoint
feat: add email sending service
test: add password reset tests
docs: update API documentation

# Bad
fix stuff
WIP
more changes
final (for real this time)
asdfasdf

4. Self-Review First:

Before requesting review:
☐ Re-read all changes
☐ Remove debug code
☐ Check for typos
☐ Tests pass locally
☐ Lint passes

5. Focused Changes:

// Bad: PR includes unrelated changes
"Add login feature + fix typo in readme + update dependencies"

// Good: One logical change
"Add login feature"

PR Etiquette:

Author:

- Respond to comments promptly
- Don't force push after review started
- Mark conversations resolved
- Request re-review after changes

Reviewer:

- Review within 24 hours
- Be specific and constructive
- Approve when ready, not perfect
- Use "nit:" for minor suggestions

Key Points to Look For:
- Keeps PRs small
- Writes good descriptions
- Clean commit history

Follow-up: How do you handle a PR that's grown too large?

9.

CI/CD pipeline stages

What stages should a CI/CD pipeline have?

Mid

CI/CD Pipeline:

┌─────────────────────────────────────────────────────────┐
│                    CI/CD Pipeline                        │
│                                                          │
│  ┌──────┐   ┌──────┐   ┌──────┐   ┌──────┐   ┌──────┐  │
│  │Build │→  │ Test │→  │ Scan │→  │Deploy│→  │Verify│  │
│  └──────┘   └──────┘   └──────┘   └──────┘   └──────┘  │
└─────────────────────────────────────────────────────────┘

Stages:

1. Build:

build:
  - checkout code
  - install dependencies
  - compile/transpile
  - create artifacts (Docker image, JAR, etc.)

2. Test:

test:
  unit:
    - run unit tests
    - generate coverage report
    - fail if coverage < threshold

  integration:
    - spin up dependencies (DB, cache)
    - run integration tests

  e2e:
    - deploy to test environment
    - run Selenium/Cypress tests

3. Security Scan:

security:
  - SAST (Static Application Security Testing)
  - dependency vulnerability scan
  - secrets detection
  - container image scan

4. Quality Gate:

quality:
  - code coverage check (> 80%)
  - code quality score (SonarQube)
  - no critical security issues
  - all tests passing

5. Deploy Staging:

deploy-staging:
  - deploy to staging environment
  - run smoke tests
  - notify team

6. Deploy Production:

deploy-prod:
  requires:
    - manual approval OR
    - automatic after staging success
  steps:
    - deploy with zero downtime
    - canary/blue-green deployment
    - run health checks

7. Verify/Monitor:

verify:
  - health check endpoints
  - smoke tests
  - monitor error rates
  - rollback if issues detected

Example GitHub Actions:

name: CI/CD
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      - run: npm run build

  # Each job runs on a fresh runner, so it needs its own checkout and install
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci
      - run: npm test
      - run: npm run test:integration

  deploy:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./deploy.sh

Key Points to Look For:
- Includes testing stages
- Has security scanning
- Includes quality gates

Follow-up: How do you handle failed deployments?

10.

Feature flags and trunk-based development

How do feature flags enable trunk-based development?

Mid

Trunk-Based Development:
Everyone commits to main branch frequently.

main: ──●──●──●──●──●──●──●──●──●──●──
       ↑  ↑  ↑  ↑  ↑
    Small, frequent commits
    No long-lived branches

Problem:

How to ship incomplete features?
main always deployed → broken features in production?

Feature Flags (Solution):

# Code in production, but hidden
if feature_flags.is_enabled("new-checkout"):
    show_new_checkout()
else:
    show_old_checkout()

Benefits:

1. Decouple Deploy from Release:

Deploy: Code goes to production (technical)
Release: Feature available to users (business)

With flags: Deploy daily, release when ready

2. Gradual Rollout:

# Percentage rollout
if feature_flags.is_enabled("new-feature", user.id, percentage=10):
    show_new_feature()  # 10% of users see the new feature

3. Quick Rollback:

Problem discovered?
Toggle flag off instead of deploying
Instant, no deployment needed

4. A/B Testing:

variant = feature_flags.get_variant("checkout-button", user.id)
if variant == "A":
    button_color = "blue"
else:
    button_color = "green"

Implementation:

# LaunchDarkly, Split, Unleash, or DIY

class FeatureFlags:
    def __init__(self, flags):
        self.flags = flags  # e.g. {"new-checkout": Flag(enabled=True)}

    def is_enabled(self, flag_name, user_id=None, percentage=100):
        flag = self.flags.get(flag_name)
        if flag is None or not flag.enabled:
            return False

        if percentage < 100:
            # Bucket users consistently so the same user always gets the same answer
            # (use a stable hash like hashlib in production; built-in hash() varies per process)
            hash_value = hash(f"{flag_name}:{user_id}") % 100
            return hash_value < percentage

        return True
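
A minimal usage sketch for the DIY class above (the Flag type and flag names are illustrative):

```python
from collections import namedtuple

Flag = namedtuple("Flag", ["enabled"])

flags = FeatureFlags({"new-checkout": Flag(enabled=True)})

flags.is_enabled("new-checkout")                              # True - fully on
flags.is_enabled("new-checkout", user_id=42, percentage=10)   # True for ~10% of user IDs
flags.is_enabled("does-not-exist")                            # False - unknown flags are off
```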

Best Practices:

1. Clean up old flags (tech debt)
2. Test both paths
3. Have kill switch for emergencies
4. Document flag purpose and owner
5. Set expiration dates

Key Points to Look For:
- Understands deploy vs release
- Knows rollout strategies
- Mentions cleanup

Follow-up: How do you manage feature flag technical debt?

11.

Documentation: what and how much

What should be documented and how much is enough?

Junior

What to Document:

1. Architecture:

# System Architecture

## Overview
[High-level diagram]

## Components
- API Gateway: Routes requests
- Auth Service: Handles authentication
- User Service: User management

## Data Flow
[Sequence diagram for key flows]

2. API Documentation:

# OpenAPI/Swagger
/users/{id}:
  get:
    summary: Get user by ID
    parameters:
      - name: id
        in: path
        required: true
        schema:
          type: integer
    responses:
      '200':
        description: User object
      '404':
        description: Not found

3. README:

# Project Name

## What it does
Brief description

## Getting Started
```bash
npm install
npm run dev
```

## Configuration
Required environment variables

## Contributing
How to contribute

4. Decision Records:

```markdown
# ADR 001: Use PostgreSQL

## Context
Need a database for user data

## Decision
PostgreSQL

## Rationale
- Team experience
- JSON support
- Free

## Consequences
- Need to manage migrations
```

What NOT to Document:

1. Self-Explanatory Code:

// Bad: Comment restates code
// Increment counter by one
counter++;

// Good: No comment needed
counter++;

2. Generated Docs:

/**
 * Gets the name.
 * @return the name
 */
public String getName() { return name; }
// Useless - automated doc generation

3. Volatile Information:

Don't: "Currently 5 servers in prod"
Do: "See infrastructure.tf for current setup"

How Much:

Enough that:
- New team member can understand system
- On-call can troubleshoot
- Future you can remember why

Not so much that:
- Docs are never updated
- Writing docs blocks development
- Duplicates information

Key Points to Look For:
- Focuses on "why" not "what"
- Documents architecture decisions
- Keeps docs maintainable

Follow-up: How do you keep documentation up to date?

12.

Incident response and postmortems

How do you handle incidents and conduct postmortems?

Mid

Incident Response Process:

1. Detection:

- Monitoring alerts
- Customer reports
- Internal discovery

"We've detected elevated error rates"

2. Triage:

Severity levels:
P1: System down, all users affected
P2: Major feature broken, many users
P3: Minor feature, some users
P4: Cosmetic, low impact

Assign severity → Determine response

3. Communication:

P1 Response:
- Notify on-call engineer
- Open incident channel (#incident-123)
- Status page update
- Regular stakeholder updates

Template:
"[INVESTIGATING] Users experiencing login failures.
 Team investigating. Next update in 30 min."

4. Mitigation:

Priority: Stop the bleeding
- Rollback if deployment caused
- Scale if load issue
- Disable feature if causing problems
- Fail over if server issue

Fix root cause AFTER mitigation

5. Resolution:

- Confirm issue resolved
- Monitor for recurrence
- Update status page
- Close incident channel

Postmortem (Blameless):

# Incident Postmortem: Login Failure 2024-01-15

## Summary
Users unable to login for 45 minutes due to
database connection exhaustion.

## Timeline
09:15 - Alerts fired for login failures
09:20 - On-call acknowledged, began investigation
09:35 - Identified database connection issue
09:45 - Rolled back recent deployment
10:00 - Service restored

## Root Cause
Recent deployment removed connection pooling,
causing connections to exhaust under load.

## Impact
- 45 minutes downtime
- ~5000 users affected
- ~$10,000 estimated lost revenue

## What Went Well
- Fast detection (5 min)
- Good communication
- Quick rollback

## What Could Be Improved
- Code review missed connection change
- No load testing before deploy
- Slow initial investigation

## Action Items
1. Add connection pool monitoring [owner: Alice, due: Jan 20]
2. Add load testing to pipeline [owner: Bob, due: Jan 30]
3. Update code review checklist [owner: Carol, due: Jan 18]

## Lessons Learned
Infrastructure changes need explicit review

Blameless Culture:

Focus on systems, not people
"The process allowed this" not "Bob caused this"
Goal: Prevent recurrence, not assign blame

Key Points to Look For:
- Has structured process
- Conducts blameless postmortems
- Creates action items

Follow-up: How do you ensure action items get completed?


Collaboration

13.

Estimating software tasks

How do you estimate software tasks?

Junior

Estimation Approaches:

1. Break Down the Work:

"Build user authentication"
↓
- Research auth libraries (2h)
- Set up auth provider (4h)
- Create login endpoint (3h)
- Create logout endpoint (1h)
- Write tests (4h)
- Documentation (2h)
Total: 16 hours (add buffer: 20-24h)

2. Use Comparisons:

"How does this compare to similar past work?"

Past: User profile page = 3 days
New: User settings page = similar complexity = ~3 days

3. Three-Point Estimation:

Optimistic: 2 days (everything goes right)
Realistic: 4 days (normal challenges)
Pessimistic: 8 days (major issues)

Expected = (O + 4×R + P) / 6
         = (2 + 16 + 8) / 6
         = 4.3 days
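
The same calculation as a tiny function (the numbers are the example above):

```python
def three_point_estimate(optimistic, realistic, pessimistic):
    # PERT-style weighted average: the realistic case counts 4x
    return (optimistic + 4 * realistic + pessimistic) / 6

three_point_estimate(2, 4, 8)   # 4.33 days
```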

4. T-Shirt Sizing (Rough):

XS: Hours
S: 1-2 days
M: 3-5 days
L: 1-2 weeks
XL: Need to break down

"This feels like a Medium"

What to Include:

☐ Implementation time
☐ Testing
☐ Code review iterations
☐ Documentation
☐ Meetings/communication
☐ Buffer for unknowns (20-30%)

Common Estimation Mistakes:

1. Optimism bias (best case only)
2. Forgetting non-coding work
3. Not accounting for interruptions
4. Underestimating unknowns
5. Pressure to give low estimates

Communication:

"I estimate 3-5 days for this task.
The uncertainty is around the third-party API integration.
I'll have a better estimate after initial investigation."

Key Points to Look For:
- Breaks down work
- Includes buffer
- Communicates uncertainty

Follow-up: How do you handle pressure to reduce estimates?

14.

Communicating technical concepts to non-technical stakeholders

How do you explain technical concepts to non-technical people?

Mid

Principles:

1. Use Analogies:

"Technical debt is like skipping oil changes: the car still runs,
 but each skipped one makes the eventual repair bigger."

"An API is like a restaurant menu: you order from the menu (the interface)
 without needing to know how the kitchen (the implementation) works."

"Refactoring is like reorganizing a warehouse: nothing new is sold,
 but every future order ships faster."

2. Focus on Impact:

Not: "We need to migrate to the new ORM version"
But: "This upgrade removes a known security risk and cuts page load time roughly in half"

Lead with what the listener cares about (cost, risk, speed),
then add technical detail only if asked.

3. Avoid Jargon:

Jargon: "The service is throwing 500s because the connection pool is exhausted"
Plain:  "The system is overloaded and rejecting requests; we're adding capacity"

4. Use Visuals:

- Simple box-and-arrow diagrams, not UML
- Before/after comparisons
- Trend charts (e.g., page load time over the last 3 months)

5. Connect to Business:

Map every technical point to revenue, cost, risk, or customer experience.
"Caching this page saves ~200ms per request → fewer abandoned checkouts."

Examples:

Explaining Downtime:

"The site was down for 45 minutes because a change overloaded the database.
 We've rolled it back, and we're adding a check so this kind of change
 gets caught before release."

Explaining Technical Debt:

"Parts of the code are like a cluttered workshop: we can still work,
 but every job takes longer. Spending one sprint tidying up will make
 the next features noticeably faster to deliver."

Explaining Estimation Uncertainty:

"It's a 3-5 day job. The range exists because we depend on a third-party
 API we haven't used before; after a day of investigation I can narrow it."

Key Points to Look For:
- Uses relatable analogies
- Focuses on business impact
- Avoids unnecessary jargon

Follow-up: How do you handle when stakeholders push back on technical needs?

15.

Handling disagreements on technical decisions

How do you handle disagreements on technical decisions?

Mid

Process:

1. Understand the Other Position:

"Help me understand why you prefer approach X"
"What concerns do you have about approach Y?"

Listen fully before responding

2. Focus on Facts, Not Opinions:

Not: "I think React is better than Angular"
But: "React has faster initial render times
     (benchmarks) and our team has more experience
     (3 devs vs 1)"

3. Define Evaluation Criteria:

"Let's agree on what matters for this decision:
- Performance
- Team expertise
- Maintenance cost
- Time to implement

Then evaluate options against these"

4. Propose Experiments:

"We disagree on which approach scales better.
Can we build a small proof-of-concept
to test with realistic load?"

5. Time-box the Discussion:

"Let's spend 30 minutes discussing this.
If we can't agree, we'll [escalate/vote/try both]"

6. Disagree and Commit:

"I still think approach Y is better,
but I understand the team prefers X.
I'll commit fully to making X succeed."

What NOT to Do:

- Make it personal
- Keep arguing after decision made
- Undermine the chosen approach
- Say "I told you so" if it fails

Escalation Path:

1. Team discussion
2. Tech lead/architect input
3. Decision record (ADR)
4. Manager involvement (last resort)

Example:

Situation: Disagreement on SQL vs NoSQL

"I see the value in NoSQL for flexibility.
My concern is our team's SQL expertise and
the need for transactions in payments.

Can we:
1. List the specific requirements?
2. Evaluate both against those?
3. Prototype the critical path in both?"

Key Points to Look For:
- Seeks to understand first
- Uses objective criteria
- Commits once decided

Follow-up: What do you do if you strongly disagree with the final decision?

16.

Mentoring junior developers

How do you mentor junior developers effectively?

Senior

Mentoring Principles:

1. Guide, Don't Solve:

Not: "Here's the solution"
But: "What approaches have you tried?
     What do you think would happen if...?"

Help them develop problem-solving skills

2. Scaffold Learning:

Week 1: Pair on simple bug fix (observe)
Week 2: Fix bug while I observe (support)
Week 3: Fix independently, review after
Week 4: Own small feature

3. Create Safe Environment:

"It's okay to not know things"
"Questions show you're learning"
"Mistakes are learning opportunities"
"I don't know either - let's figure out together"

4. Regular 1:1s:

Weekly 30-min meetings:
- How's the work going?
- What are you learning?
- What's confusing?
- Career goals discussion
- Feedback both ways

5. Code Review as Teaching:

Not: "This is wrong"
But: "This works! For future reference,
     X pattern helps with Y because Z.
     Would you like to refactor, or leave for next time?"

6. Appropriate Challenges:

Too easy: Boring, no growth
Too hard: Frustrating, discouraging

Find the "stretch zone":
Can do with effort and support

Practical Tips:

Pair Programming:

- Start with them driving, you navigating
- Explain your thinking out loud
- Let them struggle (a little)
- Celebrate small wins

Documentation:

Create learning resources:
- Common patterns in codebase
- How to debug X
- Our deployment process
- Code review checklist

Growth Tracking:

Skills matrix:
- Current level
- Target level
- Learning resources
- Progress markers

Key Points to Look For:
- Creates psychological safety
- Balances guidance with independence
- Invests time in growth

Follow-up: How do you handle a junior who isn't improving?

17.

Architecture Decision Records (ADRs)

What are Architecture Decision Records and why use them?

Senior

ADR (Architecture Decision Record):
Document capturing important architectural decisions.

Why:

- Future team understands "why"
- Avoids re-debating same decisions
- Onboarding new team members
- Audit trail for changes

Template:
```markdown

# ADR 001: Use PostgreSQL for Primary Database

## Status
Accepted | Proposed | Deprecated | Superseded

## Date
2024-01-15

## Context
We need a primary database for user and transaction data.
Requirements:
- ACID compliance for transactions
- JSON support for flexible schemas
- Horizontal scaling potential
- Team familiarity

## Decision
Use PostgreSQL 15

## Alternatives Considered
- MySQL
- MongoDB

## Consequences
### Positive
### Negative
### Risks
```

Key Points to Look For:
- Captures context and alternatives, not just the final decision
- Keeps records lightweight and stored with the code
- Marks superseded decisions instead of deleting them

Follow-up: When is a decision significant enough to need an ADR?

18.

Giving constructive code review feedback

How do you give constructive feedback in code reviews?

Junior

Goal: Improve code AND maintain positive relationships.

Principles:

1. Focus on Code, Not Person:

Not: "You wrote this wrong"
But: "This function doesn't handle the null case"

Talk about what the code does, not what the author did.

2. Be Specific and Actionable:

Not: "This could be better"
But: "This loop re-queries the database on each iteration;
     fetching the list once before the loop would avoid N calls"

3. Explain the Why:

Not: "Change this to a constant"
But: "Extracting 18 into MINIMUM_ADULT_AGE makes the business rule
     obvious and gives one place to update it if the rule changes"

4. Offer Alternatives, Not Just Criticism:


Not: "This won't scale"
But: "This works for small lists. If we expect thousands of items,
     a paginated query would hold up better - happy to pair on it if useful"

5. Use Questions, Not Commands:


Not: "Add a null check here"
But: "Could this be null if the user hasn't verified their email yet?
     Do we need a guard before dereferencing it?"

Comment Categories:


blocking: Must fix before merge (bug, security issue)
suggestion: Worth considering, author's call
question: Asking to understand, not requesting a change
nit: Minor style/polish, feel free to ignore
praise: Call out things done well

Praise Template:


"Nice use of the builder pattern here - much easier to read than
 the old constructor with 8 parameters."

Specific praise reinforces good practices; a generic "LGTM" doesn't.

Tone Checklist:


☐ Would I say this to the person's face?
☐ Am I commenting on the code, not the coder?
☐ Did I explain why, not just what?
☐ Did I acknowledge anything done well?


Key Points to Look For:
- Focuses on code, not person
- Provides specific, actionable feedback
- Explains reasoning
- Maintains positive tone

Follow-up: How do you handle when someone pushes back on your feedback?


19.

Receiving critical feedback professionally

How do you handle receiving critical feedback about your work?

Junior

Mindset: Feedback is a gift that helps you grow.

In the Moment:

1. Listen Without Defending:

Bad:  "But I did it this way because..." (interrupting)
Good: "I hear you. Can you tell me more about what you noticed?"

Bad:  "That's not fair, you don't understand the constraints"
Good: "I want to make sure I understand the concern fully."

2. Ask Clarifying Questions:

"Can you give me a specific example?"
"What would you have expected instead?"
"What would success look like here?"
"On a scale of 1-10, how significant is this issue?"

3. Thank the Person:

"Thanks for pointing that out. I appreciate you taking
 the time to give me this feedback."

"I hadn't considered that perspective. This is helpful."

Processing Feedback:

1. Separate Facts from Feelings:

What they said: "This code is hard to follow"
How it felt: "I'm a bad developer"

Focus on: "What specific aspects are confusing?
          How can I make it clearer?"

2. Look for Patterns:

One person says code is complex → Maybe personal preference
Multiple people say same thing → Worth addressing

Ask yourself:
"Have I heard this feedback before?"
"Is there a pattern I'm missing?"

3. Take Action:

1. Acknowledge: "You're right, this could be clearer"
2. Commit: "I'll refactor this section"
3. Follow up: "I've updated it - does this work better?"

When You Disagree:

Not: "You're wrong"
But: "I see it differently. Here's my reasoning..."

Steps:
1. Acknowledge their point: "I understand your concern about X"
2. Share your perspective: "My thinking was..."
3. Seek understanding: "What am I missing?"
4. Commit once resolved: "That makes sense, I'll change it"
                     OR "Can we agree to disagree on style?"

Red Flags to Avoid:

❌ Getting defensive immediately
❌ Making excuses
❌ Dismissing without consideration
❌ Taking it personally
❌ Holding grudges

Growth Mindset:

Fixed: "I should already know this"
Growth: "Now I know this for next time"

Fixed: "They think I'm incompetent"
Growth: "They're investing in my development"

Fixed: "I'll never be good enough"
Growth: "This is one area I can improve"

Key Points to Look For:
- Listens without being defensive
- Asks clarifying questions
- Shows willingness to improve
- Separates self from work

Follow-up: How do you handle feedback you fundamentally disagree with?

20.

Managing up: communicating delays to management

How do you communicate project delays or bad news to management?

Mid

Principles:

1. Early and Proactive:

Bad:  Day before deadline: "We're not going to make it"
Good: Week before: "I see a risk to the timeline. Here's what I know..."

The earlier you communicate, the more options available.

2. Come with Information, Not Just Problems:

Template:

1. THE SITUATION (What happened)
   "We've encountered an issue with the payment integration."

2. THE IMPACT (What it means)
   "This puts our March 15 deadline at risk by 1-2 weeks."

3. THE OPTIONS (What we can do)
   "We have three options:
    A) Reduce scope by removing feature X
    B) Add a contractor to parallel-track
    C) Push the deadline to March 30"

4. MY RECOMMENDATION
   "I recommend option C because..."

5. WHAT I NEED FROM YOU
   "I need your decision by Friday to adjust the plan."

3. Quantify Impact:

Bad:  "This will take longer than expected"
Good: "This will add 5-7 days to the timeline"

Bad:  "There might be issues"
Good: "There's a 60% chance we miss the deadline without changes"

4. Own the Situation:

Bad:  "It's not my fault, the requirements changed"
Good: "We underestimated the complexity. Here's how we're adjusting."

Bad:  "No one told me about X"
Good: "In hindsight, I should have asked about X earlier."

Conversation Example:

"I wanted to give you a heads up on the API project.

We've hit an issue with the third-party service integration
that's more complex than we estimated. This puts our June 1st
deadline at risk.

Current status: We're 70% done, but the remaining 30% depends
on this integration.

Impact: If we don't solve this, we'll likely need 2 more weeks.

Options I see:
1. Ship without the integration feature (removes 20% of value)
2. Push launch to June 15 (full feature set)
3. Parallel-track with another developer (adds cost, same timeline)

My recommendation is option 2, because the integration is
the key differentiator for this release.

I'd like your input on this by Thursday so I can adjust
the plan. What questions do you have?"

Follow-Up Best Practices:

1. Send written summary after verbal conversation
2. Update regularly (even if status is same)
3. Highlight if situation improves or worsens
4. Close the loop when resolved

What NOT to Do:

❌ Surprise at the last minute
❌ Only report problems, never solutions
❌ Blame others
❌ Minimize or hide the issue
❌ Over-promise to compensate

Key Points to Look For:
- Communicates proactively
- Provides options and recommendations
- Takes ownership
- Quantifies impact

Follow-up: How do you handle a manager who doesn't want to hear bad news?

21.

Onboarding: How to ramp up quickly on a new codebase

How do you ramp up quickly when joining a new team or project?

Junior

Week 1: Understand the Landscape

1. Documentation First:

Read in order:
1. README - Project overview, setup instructions
2. Architecture docs - How pieces fit together
3. API documentation - Key interfaces
4. Recent design docs - Current thinking
5. Onboarding guide (if exists)

2. Get the App Running:

# Clone, build, run locally
# Don't skip this - reveals assumptions

Ask: "What's the happy path?"
Run through it as a user

3. Map the Codebase:

Identify:
- Entry points (main.py, index.js, App.java)
- Folder structure patterns
- Key abstractions/interfaces
- Configuration files
- Test locations

Week 2-4: Learn by Doing

1. Start with Small Tasks:

Good first tasks:
- Fix a small bug
- Add a log message
- Update documentation
- Write a missing test
- Small UI tweak

Each task teaches:
- How to find code
- How to test changes
- How to submit PRs
- Team norms and style

2. Read Recent PRs:

Look for:
- Code style conventions
- PR description format
- Review process
- Common patterns

3. Debug Something:

Set breakpoints, step through code
"How does X actually work?"
Better than reading docs alone

Building Relationships:

1. Schedule 1:1s:

Meet with:
- Manager (expectations, priorities)
- Tech lead (architecture, history)
- Key team members (domain knowledge)
- Adjacent teams (dependencies)

Ask:
"What should I know that isn't documented?"
"What do you wish you knew when you started?"
"What's the biggest pain point?"

2. Find a Buddy:

Someone to ask "dumb questions"
Ideally joined recently (remembers onboarding)
Pair program with them

Strategies for Understanding:

1. Trace a Request:

Pick a feature (e.g., "user login")
Follow code from UI → API → Database
Document what you learn

2. Draw Diagrams:

Create your own architecture diagrams
Share for feedback
"Does this look right?"

3. Write Down Questions:

Keep a running list
Batch them for 1:1s
Look up answers before asking

4. Improve as You Learn:

Found confusing code? Add comments
Setup was painful? Update README
Process unclear? Document it

You have fresh eyes - use them

30-60-90 Day Mindset:

30 days: Learn the system, ship small fixes
60 days: Own a feature, contribute to design
90 days: Identify improvements, mentor new joiners

What NOT to Do:

❌ Suffer in silence (ask questions!)
❌ Try to change everything immediately
❌ Skip documentation
❌ Avoid pairing/meetings
❌ Only read code, never run it

Key Points to Look For:
- Structured approach
- Balances reading and doing
- Builds relationships
- Contributes early

Follow-up: How do you handle a codebase with poor documentation?