These laws originate from Dr. Heinz Kabutz's acclaimed "Secrets of Concurrency" series published in the JavaSpecialists Newsletter.
Law 1: The Sabotaged Doorbell
Statement: Instead of arbitrarily suppressing interruptions, manage them properly and respect the interrupt mechanism.
Detailed Explanation:
Thread interruption is Java's cooperative cancellation mechanism. When a thread is interrupted, it's a signal that the thread should stop what it's doing and clean up. Simply catching InterruptedException and doing nothing (or worse, suppressing it) is like disconnecting a doorbell—messages never get through.
Best Practices:
- Never suppress interrupts: Always restore the interrupt status if you can't throw the exception
- Propagate or restore: Either throw InterruptedException or call Thread.currentThread().interrupt()
- Clean up properly: Use try-finally blocks to release resources before exiting
Example:
// ❌ BAD: Swallowing the interrupt
public void badExample() {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// Do nothing - interrupt lost!
}
}
// ✅ GOOD: Restoring interrupt status
public void goodExample() {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
Thread.currentThread().interrupt(); // Restore interrupt
// Clean up and exit
}
}
// ✅ GOOD: Propagating the exception
public void anotherGoodExample() throws InterruptedException {
Thread.sleep(1000); // Let caller handle it
}
Modern Context: With virtual threads in Java 21+, proper interrupt handling becomes even more critical for responsive applications. Virtual threads are designed for high concurrency with blocking I/O operations.
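The try-finally advice above can be sketched as a worker loop; the queue, process(), and releaseResources() names are illustrative, not from the newsletter:

```java
import java.util.concurrent.BlockingQueue;

public class QueueWorker implements Runnable {
    private final BlockingQueue<String> queue;

    QueueWorker(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                process(queue.take());          // take() throws InterruptedException when interrupted
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the status so callers can see it
        } finally {
            releaseResources();                 // always clean up before the thread exits
        }
    }

    private void process(String item) { /* handle one item */ }

    private void releaseResources() { /* close files, sockets, etc. */ }
}
```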
Law 2: The Distracted Spearfisherman
Statement: When debugging concurrent code, focus intensely on understanding one thread at a time. Jumping between threads without full understanding leaves you with nothing.
Detailed Explanation:
Just as a spearfisher must commit to targeting one fish rather than being distracted by every fish in the school, developers must analyze thread dumps methodically. Understanding what every thread is doing requires focused, systematic analysis of each thread's state, stack trace, and lock holdings.
Best Practices:
- Generate thread dumps: Use jstack, VisualVM, or JFR (Java Flight Recorder)
- Analyze systematically: Examine each thread completely before moving to the next
- Understand thread states: RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED
- Track lock ownership: Modern JVMs show which locks are held and waited upon
Modern Tools:
- JDK Mission Control + Flight Recorder: Real-time profiling and thread analysis
- Async Profiler: Low-overhead profiling with flamegraphs
- ThreadMXBean: Programmatic deadlock detection
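ThreadMXBean can also drive that systematic, thread-by-thread walk programmatically. A minimal sketch (the ThreadDumper class is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    public static void dump() {
        ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
        // true, true -> include locked monitors and ownable synchronizers (ReentrantLock etc.)
        for (ThreadInfo info : mxBean.dumpAllThreads(true, true)) {
            System.out.printf("\"%s\" state=%s%n", info.getThreadName(), info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```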
Example - Thread Dump Analysis:
// Modern deadlock detection
ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
long[] deadlockedThreads = threadMXBean.findDeadlockedThreads();
if (deadlockedThreads != null) {
ThreadInfo[] threadInfos = threadMXBean.getThreadInfo(deadlockedThreads);
for (ThreadInfo info : threadInfos) {
System.err.println("Deadlocked thread: " + info.getThreadName());
// Analyze stack trace and locks
}
}
Law 3: The Overstocked Haberdashery
Statement: Creating too many threads degrades application performance, increases memory consumption, and makes debugging extremely difficult—even if most threads are inactive.
Detailed Explanation:
A haberdashery overstocked with threads (sewing threads!) creates inventory problems. Similarly, excessive platform threads in Java consume significant resources. Each platform thread typically requires 1MB of stack space and OS-level resources. Creating tens of thousands of threads can exhaust system resources, cause OutOfMemoryErrors, or even crash the JVM.
Key Insights:
- Active threads: Should correlate with CPU cores (typically 2-4x core count)
- Inactive threads: Still consume stack memory and complicate debugging
- Thread creation cost: Not negligible despite common myths
Best Practices:
| Approach | Use Case | Benefits |
|---|---|---|
| Fixed Thread Pool | Bounded workload | Prevents resource exhaustion |
| Cached Thread Pool | Short-lived, many tasks | Reuses threads efficiently |
| Virtual Threads (Java 21+) | High-concurrency I/O | Millions of threads possible |
| Structured Concurrency | Task hierarchies | Simplified lifecycle management |
Example:
// ❌ BAD: Creating threads directly
for (int i = 0; i < 100_000; i++) {
new Thread(() -> doWork()).start(); // Resource disaster!
}
// ✅ GOOD: Using thread pool (traditional)
ExecutorService executor = Executors.newFixedThreadPool(
Runtime.getRuntime().availableProcessors() * 2
);
for (int i = 0; i < 100_000; i++) {
executor.submit(() -> doWork());
}
executor.shutdown();
// ✅ EXCELLENT: Using virtual threads (Java 21+)
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
for (int i = 0; i < 100_000; i++) {
executor.submit(() -> doWork());
}
} // Auto-close waits for submitted tasks to complete
Modern Context: Virtual threads fundamentally change this law for I/O-bound tasks. You can now create millions of virtual threads with minimal overhead, as they're lightweight and managed by the JVM rather than the OS. However, for CPU-bound tasks, the traditional guidance still applies.
Law 4: The Blind Spot
Statement: Without proper synchronization, threads cannot reliably see updates made by other threads to shared data. Visibility is not guaranteed.
Detailed Explanation:
The Java Memory Model allows threads to cache field values locally. Like a blind spot when changing lanes, one thread may not see changes made by another thread without proper "mirrors" (synchronization). The result is stale reads: a thread keeps acting on a value that another thread has long since changed.
Technical Details: The JMM provides no visibility guarantees for unsynchronized access to shared mutable state. A thread might loop forever on a flag that another thread has changed.
Solutions:
| Technique | When to Use | Guarantees |
|---|---|---|
| synchronized | General-purpose locking | Mutual exclusion + visibility |
| volatile | Simple flags, non-compound operations | Visibility only (no atomicity) |
| final | Immutable values | Cheapest visibility guarantee |
| java.util.concurrent.locks | Complex locking scenarios | Flexible locking + visibility |
| java.util.concurrent.atomic | Atomic operations | Lock-free thread safety |
| VarHandle (Java 9+) | Fine-grained memory ordering | Advanced control |
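As a sketch of the last row, a VarHandle gives volatile-strength access to an otherwise plain field (the Flag class and its field are illustrative):

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

public class Flag {
    private static final VarHandle RUNNING;
    static {
        try {
            RUNNING = MethodHandles.lookup()
                    .findVarHandle(Flag.class, "running", boolean.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private boolean running = true;   // plain field, accessed through the VarHandle

    public void stop() {
        RUNNING.setVolatile(this, false);           // volatile-strength write
    }

    public boolean isRunning() {
        return (boolean) RUNNING.getVolatile(this); // volatile-strength read
    }
}
```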
Example:
// ❌ BAD: No visibility guarantee
public class UnsafeFlag {
private boolean running = true;
public void stop() {
running = false; // May never be seen by run()
}
public void run() {
while (running) {
// May loop forever!
}
}
}
// ✅ GOOD: Using volatile
public class SafeFlag {
private volatile boolean running = true;
public void stop() {
running = false; // Guaranteed visible
}
public void run() {
while (running) {
// Will exit properly
}
}
}
// ✅ GOOD: Using AtomicBoolean
public class AtomicFlag {
private final AtomicBoolean running = new AtomicBoolean(true);
public void stop() {
running.set(false);
}
public void run() {
while (running.get()) {
// Thread-safe and visible
}
}
}
Law 5: The Leaked Memo
Statement: The JVM and CPU can reorder statements for optimization, causing field values to become visible in seemingly impossible orders. Without synchronization, writes may appear to "leak" prematurely.
Detailed Explanation:
The Java Memory Model allows statement reordering as long as single-threaded semantics are preserved. However, from another thread's perspective, operations may appear to execute in a different order. This is like leaving a confidential memo in the copier—it leaks information before you intended.
Classic Example:
public class EarlyWrites {
private int x = 0;
private int y = 0;
// Thread 1
public void writer() {
x = 1;
y = 2;
}
// Thread 2
public void reader() {
int a = y; // Could see 2
int b = x; // Could still see 0!
}
}
Without synchronization, the reader might see y == 2 but x == 0, even though x = 1 was written first in program order.
The Happens-Before Relationship:
Synchronization establishes happens-before edges that prevent reordering:
- An unlock of a monitor happens-before every subsequent lock of that same monitor
- A write to a volatile field happens-before every subsequent read of that field
- A call to Thread.start() happens-before any action in the started thread
- All actions in a thread happen-before another thread returns from a join() on that thread (illustrated in the sketch below)
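A minimal sketch of the last two edges (class and names illustrative): the worker's write is visible after join() with no volatile and no lock.

```java
public class StartJoinVisibility {
    private int result;   // plain field, safely published via the start/join edges

    public int compute() throws InterruptedException {
        Thread worker = new Thread(() -> result = 42);  // write happens inside the worker
        worker.start();   // everything before start() is visible to the worker
        worker.join();    // everything the worker did is visible once join() returns
        return result;    // guaranteed to read 42
    }
}
```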
Solutions:
// ✅ Fix 1: Synchronization
public class Synchronized {
private int x = 0;
private int y = 0;
private final Object lock = new Object();
public void writer() {
synchronized (lock) {
x = 1;
y = 2;
}
}
public void reader() {
synchronized (lock) {
int a = y;
int b = x; // Guaranteed to see updates
}
}
}
// ✅ Fix 2: Volatile (for reference assignment)
public class VolatileFix {
private int x = 0;
private volatile int y = 0;
public void writer() {
x = 1;
y = 2; // Volatile write flushes x too
}
public void reader() {
int a = y; // Volatile read
int b = x; // Guaranteed to see x=1 if y=2
}
}
Double-Checked Locking: This pattern is broken without volatile because of early writes:
// ❌ BROKEN without volatile
private Singleton instance;
public Singleton getInstance() {
if (instance == null) { // Check 1
synchronized (this) {
if (instance == null) { // Check 2
instance = new Singleton(); // Can leak!
}
}
}
return instance;
}
// ✅ FIXED with volatile
private volatile Singleton instance;
Law 6: The Corrupt Politician
Statement: Without adequate synchronization controls, data corruption from concurrent access is inevitable. Power corrupts; unsynchronized concurrent access corrupts data.
Detailed Explanation:
When multiple threads access shared mutable state without synchronization, data races corrupt object state. Like politicians without oversight, unsupervised concurrent operations lead to corruption.
Classic Example - Bank Account:
// ❌ UNSAFE: Data race on balance
public class UnsafeBankAccount {
private int balance;
public void deposit(int amount) {
balance += amount; // NOT ATOMIC!
// Actually: temp = balance; temp += amount; balance = temp;
}
}
The += operation involves read-modify-write, which is not atomic. Two concurrent deposits can result in lost updates.
Detection Signs:
- Unexpected NullPointerException in "impossible" locations
- Broken assertions or invariants
- Corrupted data structures (e.g., negative balances, broken XML DOM trees)
Proper Synchronization Solutions:
// ✅ Solution 1: synchronized method
public class SynchronizedAccount {
private int balance;
public synchronized void deposit(int amount) {
balance += amount;
}
public synchronized int getBalance() {
return balance; // Must synchronize reads too!
}
}
// ✅ Solution 2: ReentrantLock
public class LockAccount {
private int balance;
private final Lock lock = new ReentrantLock();
public void deposit(int amount) {
lock.lock();
try {
balance += amount;
} finally {
lock.unlock();
}
}
}
// ✅ Solution 3: ReentrantReadWriteLock (many readers, few writers)
public class ReadWriteAccount {
private int balance;
private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
public void deposit(int amount) {
rwLock.writeLock().lock();
try {
balance += amount;
} finally {
rwLock.writeLock().unlock();
}
}
public int getBalance() {
rwLock.readLock().lock();
try {
return balance;
} finally {
rwLock.readLock().unlock();
}
}
}
// ✅ Solution 4: AtomicInteger (best for simple counters)
public class AtomicAccount {
private final AtomicInteger balance = new AtomicInteger();
public void deposit(int amount) {
balance.addAndGet(amount); // Atomic operation
}
public int getBalance() {
return balance.get();
}
}
Modern Best Practices:
- Prefer java.util.concurrent collections such as ConcurrentHashMap and CopyOnWriteArrayList (see the sketch below)
- Use atomic classes for simple counters and flags
- Consider immutable data structures to avoid synchronization entirely
- Use @GuardedBy annotations (from jcip-annotations or ErrorProne) to document locking contracts
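As a sketch of the first point, ConcurrentHashMap applies per-key updates atomically with no explicit locking (the account map is illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AccountBalances {
    private final ConcurrentMap<String, Integer> balances = new ConcurrentHashMap<>();

    public void deposit(String accountId, int amount) {
        // merge() applies the remapping function atomically for the given key
        balances.merge(accountId, amount, Integer::sum);
    }

    public int balanceOf(String accountId) {
        return balances.getOrDefault(accountId, 0);
    }
}
```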
Law 7: The Micromanager
Statement: Excessive synchronization (over-locking or locking at the wrong granularity) wastes resources, frustrates other threads, and creates performance bottlenecks through lock contention.
Detailed Explanation:
Just as micromanagers in organizations create inefficiency by controlling every detail, excessive synchronization serializes concurrent operations and prevents full CPU utilization. As core counts grow, lock contention often becomes the primary scalability bottleneck.
Anti-Patterns:
- Synchronizing on String constants:
// ❌ TERRIBLE: All instances share same lock!
private String LOCK = "MY_LOCK"; // Interned string
synchronized (LOCK) { // Every class using "MY_LOCK" contends!
// Critical section
}
// ✅ GOOD: Private lock object
private final Object lock = new Object();
synchronized (lock) {
// Critical section
}
String literals are interned, so "MY_LOCK" in different classes points to the same object, causing system-wide contention!
- Synchronizing entire methods when only small sections need protection:
// ❌ BAD: Entire method synchronized
public synchronized void process() {
doExpensiveCalculation(); // Doesn't need lock
sharedState.update(); // Only this needs lock
doMoreExpensiveWork(); // Doesn't need lock
}
// ✅ GOOD: Minimal critical section
public void process() {
doExpensiveCalculation();
synchronized (lock) {
sharedState.update(); // Only what's necessary
}
doMoreExpensiveWork();
}
Modern Solutions:
| Technique | Benefit | Use Case |
|---|---|---|
| Lock striping | Reduces contention | ConcurrentHashMap uses this |
| Lock-free algorithms | No blocking | AtomicInteger, ConcurrentLinkedQueue |
| ReadWriteLock | Multiple readers | Read-heavy workloads |
| StampedLock (Java 8+) | Optimistic reads | Very read-heavy scenarios |
| Virtual threads | Cheap blocking | Makes blocking acceptable |
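As a sketch of the StampedLock row, an optimistic read avoids blocking writers when there is no contention; this is adapted from the customary Point example:

```java
import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = lock.writeLock();          // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();  // no blocking, just a stamp
        double currentX = x, currentY = y;
        if (!lock.validate(stamp)) {            // a write intervened; fall back to a real read lock
            stamp = lock.readLock();
            try {
                currentX = x;
                currentY = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.hypot(currentX, currentY);
    }
}
```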
Law 8: Cretan Driving
Statement: The JVM does not enforce all concurrency rules. Code may work correctly on current JVMs but contain subtle bugs that surface on different architectures or future JVM versions.
Detailed Explanation:
Like Cretan drivers who ignore traffic rules without immediate consequences, Java code can violate the JVM specification without apparent problems—until you encounter stricter enforcement. The JVM spec was written for diverse hardware architectures, so some rules are recommendations rather than requirements.
Example - 64-bit Values:
The JVM spec states: "VM implementers are encouraged to avoid splitting their 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly."
// ❌ POTENTIALLY UNSAFE
public class LongFields {
private long value; // Not synchronized or volatile
public void set(long v) { value = v; }
public long get() { return value; }
}
On some architectures, reads and writes of long and double values are not atomic: they may be split into two 32-bit operations. If one thread writes 0x1111111111111111L while another writes 0xABCD0000ABCD0000L, a reader could observe a torn value such as 0x11111111ABCD0000L, combining the high half of one write with the low half of the other.
Best Practices:
- Always synchronize shared mutable state (or use volatile/atomic classes)
- Test on multiple architectures (x86, ARM, different JVM implementations)
- Use concurrency testing tools: jcstress (Java Concurrency Stress tests)
- Follow the Java Memory Model strictly: Don't rely on observed behavior
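The jcstress point can be made concrete. A rough sketch of a tearing test follows; it assumes jcstress's @JCStressTest/@Actor/@Outcome annotations and the J_Result holder for a single long result, so check the jcstress samples for the exact result-class names:

```java
import org.openjdk.jcstress.annotations.Actor;
import org.openjdk.jcstress.annotations.Expect;
import org.openjdk.jcstress.annotations.JCStressTest;
import org.openjdk.jcstress.annotations.Outcome;
import org.openjdk.jcstress.annotations.State;
import org.openjdk.jcstress.infra.results.J_Result;

@JCStressTest
@Outcome(id = "0", expect = Expect.ACCEPTABLE, desc = "Read before the write")
@Outcome(id = "-1", expect = Expect.ACCEPTABLE, desc = "Read the complete write")
@Outcome(expect = Expect.ACCEPTABLE_INTERESTING, desc = "Torn read: only half of the write was seen")
@State
public class LongTearingTest {
    long v; // deliberately neither volatile nor synchronized

    @Actor
    public void writer() {
        v = -1L; // all 64 bits set
    }

    @Actor
    public void reader(J_Result r) {
        r.r1 = v; // 0, -1, or a torn half-and-half value
    }
}
```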
// ✅ SAFE: Proper synchronization
public class SafeLongFields {
private volatile long value; // Or use AtomicLong
public void set(long v) { value = v; }
public long get() { return value; }
}
Law 9: Sudden Riches
Statement: Adding resources (faster CPUs, more cores, faster I/O, more memory) to a seemingly stable system can expose hidden concurrency bugs, making the system unstable.
Detailed Explanation:
Like lottery winners whose lives fall apart from sudden wealth, systems can fail when given better hardware. Faster hardware increases concurrency, exposing race conditions that occurred rarely on slower systems.
Real-World Example:
A company upgraded to a server roughly four times faster, with more cores. The old server had run fine, but the new one occasionally prevented logins during high load. The root cause: DOM tree corruption from a data race that had manifested about once a year on the old hardware, but weekly on the new hardware.
Why This Happens:
- Increased parallelism: More cores mean more true concurrency
- Different timings: Race condition windows that were rare become common
- Faster execution: Bugs hidden by slow I/O become visible
- Memory ordering differences: Different CPU architectures have different memory models
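A minimal sketch of such a latent bug (hypothetical class, not from the original article): the check-then-act window below is tiny, so a slower machine may almost never hit it, while a faster many-core box hits it constantly.

```java
import java.util.HashMap;
import java.util.Map;

public class LazyCache {
    private static Map<String, String> cache;   // shared, not synchronized

    public static Map<String, String> instance() {
        if (cache == null) {                     // check
            cache = new HashMap<>();             // then act: two threads may both get here
        }
        return cache;                            // may also be seen half-constructed by other threads
    }
}
```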
Prevention Strategies:
- Assume hardware will get faster: Write correct concurrent code from the start
- Load testing: Stress test with high concurrency before production
- Concurrency testing tools:
  - jcstress for finding race conditions
  - Thread Sanitizer (if using native code)
  - Chaos engineering to vary timings
- Code reviews: Focus on concurrent access patterns
- Static analysis: Tools like SpotBugs and ErrorProne with concurrency checks
Key Lesson: The bug existed all along. Better hardware didn't create the bug; it revealed it. Never assume performance improvements won't expose concurrency issues.
Law 10: The Uneaten Lutefisk
Statement: Deadlocks in Java can be detected using ThreadMXBean and sometimes resolved using interruption with Java 5+ locks, but monitor deadlocks usually require a JVM restart.
Detailed Explanation:
In the classic "uneaten lutefisk" scenario, a stubborn parent and child deadlock over dinner. In Java, deadlocks occur when threads acquire locks in conflicting orders. The original law stated deadlocks require JVM restart, but modern Java provides detection and partial resolution capabilities.
Deadlock Detection:
// ✅ Programmatic deadlock detection
ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
// Detect monitor deadlocks only
long[] deadlockedMonitors = threadMXBean.findMonitorDeadlockedThreads();
// Detect all deadlocks (monitors + java.util.concurrent locks)
long[] deadlockedAll = threadMXBean.findDeadlockedThreads();
if (deadlockedAll != null) {
ThreadInfo[] infos = threadMXBean.getThreadInfo(deadlockedAll, true, true);
for (ThreadInfo info : infos) {
System.err.println("Deadlocked: " + info.getThreadName());
// Log stack traces, locks held, locks waiting for
}
}
Resolution Strategies:
| Lock Type | Effect of stop() | Effect of interrupt() | Prevention |
|---|---|---|---|
| synchronized (monitor) | Breaks JVM | ❌ No effect | Lock ordering |
| ReentrantLock.lock() | ✅ Releases | ❌ No effect | Use lockInterruptibly() |
| ReentrantLock.lockInterruptibly() | ✅ Releases | ✅ Throws exception | Allows recovery |
| ReentrantLock.tryLock(timeout) | ✅ Releases | ✅ Times out | Self-healing |
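As a sketch of the lockInterruptibly() row, a thread stuck waiting for a lock can be freed by interrupting it (the class and method names are illustrative):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLocking {
    private final Lock lock = new ReentrantLock();

    public void doWork() {
        try {
            lock.lockInterruptibly();   // waits for the lock, but can be cancelled via interrupt()
            try {
                // critical section
            } finally {
                lock.unlock();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // restore the status and give up gracefully
        }
    }
}
```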
Never use Thread.stop():
- Breaks class invariants
- Corrupts object state
- Breaks JVM deadlock detection
- Deprecated and unsafe
Best Prevention Strategies:
- Lock ordering: Always acquire locks in consistent order
// ✅ GOOD: Consistent ordering
public void transfer(Account from, Account to, int amount) {
Account first = (from.id < to.id) ? from : to;
Account second = (from.id < to.id) ? to : from;
synchronized (first) {
synchronized (second) {
from.debit(amount);
to.credit(amount);
}
}
}
- Use tryLock with timeout:
// ✅ GOOD: Timeout prevents indefinite deadlock
Lock lock1 = new ReentrantLock();
Lock lock2 = new ReentrantLock();
if (lock1.tryLock(100, TimeUnit.MILLISECONDS)) {
try {
if (lock2.tryLock(100, TimeUnit.MILLISECONDS)) {
try {
// Critical section
} finally {
lock2.unlock();
}
}
} finally {
lock1.unlock();
}
}
- Use higher-level concurrency utilities:
  - CompletableFuture for async operations
  - ConcurrentHashMap instead of synchronized Map
  - ExecutorService for task management
  - Structured concurrency (Java 21+) for automatic resource management
- Dining Philosophers solution:
// ✅ GOOD: Break circular wait condition
public class Philosopher implements Runnable {
private final Lock leftChopstick;
private final Lock rightChopstick;
public void run() {
try {
while (!Thread.currentThread().isInterrupted()) {
think();
// Try to get both chopsticks with timeout
if (leftChopstick.tryLock(50, TimeUnit.MILLISECONDS)) {
try {
if (rightChopstick.tryLock(50, TimeUnit.MILLISECONDS)) {
try {
eat();
} finally {
rightChopstick.unlock();
}
}
} finally {
leftChopstick.unlock();
}
}
}
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
}
- Prefer Virtual Threads for I/O-bound workloads:
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
executor.submit(() -> handleRequest());
}
- Use Structured Concurrency:
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
var user = scope.fork(() -> fetchUser()); // fork() returns a Subtask (JEP 453), not a Future
var order = scope.fork(() -> fetchOrder());
scope.join(); // Wait for all subtasks
scope.throwIfFailed(); // Propagate errors
return new Response(user.get(), order.get()); // Subtask.get() reads each result
}
- Leverage java.util.concurrent:
  - ConcurrentHashMap for shared maps
  - CopyOnWriteArrayList for read-heavy lists
  - BlockingQueue for producer-consumer patterns
  - CompletableFuture for async pipelines
- Immutability First:
  - Use record classes (Java 14+) for immutable data (see the sketch after this list)
  - Prefer functional transformations over mutation
  - Consider Valhalla value types when available
- Testing:
  - Use jcstress for concurrency correctness
  - Chaos testing with variable loads
  - Thread sanitizers and race detectors
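As a sketch of the immutability point above, a record gives final fields and safe publication essentially for free (the Transfer type is illustrative):

```java
// All components are final; a fully initialized record can be shared between threads
// without locks, as long as the component types are themselves immutable.
public record Transfer(String fromAccount, String toAccount, int amount) {
    public Transfer {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
    }
}
```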
Summary of the Ten Laws:
| Law | Core Issue | Primary Solution |
|---|---|---|
| 1. Sabotaged Doorbell | Suppressed interrupts | Restore interrupt status |
| 2. Distracted Spearfisherman | Unfocused debugging | Systematic thread analysis |
| 3. Overstocked Haberdashery | Too many threads | Thread pools / Virtual threads |
| 4. Blind Spot | Visibility issues | volatile / synchronized |
| 5. Leaked Memo | Instruction reordering | Happens-before relationships |
| 6. Corrupt Politician | Data races | Proper synchronization |
| 7. Micromanager | Lock contention | Minimal critical sections |
| 8. Cretan Driving | Unforced rules | Strict JMM compliance |
| 9. Sudden Riches | Hidden race conditions | Correct concurrent design |
| 10. Uneaten Lutefisk | Deadlocks | Detection + prevention |
These laws remain relevant in modern Java, though technologies like virtual threads and structured concurrency provide better tools for managing complexity. The fundamental principles of memory visibility, synchronization, and careful concurrent design remain essential.