Deconstructing the Kotlin Compiler and Bytecode
This is the finale of the Kotlin knowledge architecture. The preceding ten articles dissected various Kotlin language features—null safety, generics, coroutines, DSLs, interoperability. However, one overarching question remains unresolved: Exactly how do these high-level features morph into bytecode executable by the JVM?
The answer lies deep within the Kotlin compiler. Understanding the compiler isn't about accumulating trivia; it's about engineering an intuition: when you write a line of Kotlin code, your mind should simultaneously project its physical reality at the bytecode layer. This intuition is the foundational guarantee for engineering zero-defect, industrial-grade software.
I. The Compiler Pipeline
Before diving into granular mechanics, let's establish a global map of the terrain.
┌─────────────────────────────────────────────────────────────────┐
│ Kotlin Compiler Pipeline │
├─────────────┬──────────────────────────────┬───────────────────┤
│ Frontend │ Intermediate Representation │ Backend │
│ Phase │ (IR) │ Phase │
├─────────────┼──────────────────────────────┼───────────────────┤
│ │ │ ──→ JVM Bytecode │
│ Source Code │ │ (.class) │
│ ↓ │ │ │
│ Lexer │ │ ──→ Native Binary │
│ ↓ │ ┌───────────────────────┐ │ (LLVM) │
│ Parser │ │ Kotlin IR (IR) │ │ │
│ ↓ │ │ Platform-agnostic │ │ ──→ JavaScript │
│ Semantic │→→│ semantic model; the │→→ │ (.js) │
│ Analysis │ │ architectural core │ │ │
│ ↓ │ └───────────────────────┘ │ ──→ WebAssembly │
│ FIR (K2) │ │ (.wasm) │
│ │ │ │
└─────────────┴──────────────────────────────┴───────────────────┘
The pipeline divides into three major phases: the Frontend "reads and comprehends" your code, the IR acts as the central translation hub decoupling language from platform, and the Backend synthesizes the executable payload for a specific target architecture.
1.1 The Frontend: From Source to Semantic Model
The Frontend's primary directive is converting human-readable source code into a semantic model the compiler can manipulate. This involves four sub-phases:
Lexical Analysis (Lexer): Slices the raw character stream into a sequence of Tokens. val x = 1 + 2 is shattered into [val, IDENTIFIER(x), EQ, INT(1), PLUS, INT(2)]. This phase ignores meaning; it strictly performs symbol recognition.
Syntax Analysis (Parser): Assembles the Token sequence into a Concrete Syntax Tree (CST/PSI). This tree models the grammatical structure—which Tokens form an expression, which form a function declaration.
Semantic Analysis: The true "heavy lifting" of the frontend. The compiler traverses the syntax tree to perform type inference, name resolution, overload resolution, and visibility checks... elevating "understanding grammar" to "comprehending meaning."
FIR Generation (K2 Exclusive): The K2 compiler introduces a radically new frontend data structure, which we will analyze in the following section.
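The lexical phase can be illustrated with a toy tokenizer — a deliberate simplification (the real lexer scans a character stream rather than splitting on whitespace; every name below is invented for illustration):

```kotlin
// Toy lexer sketch: slices `val x = 1 + 2` into tokens, ignoring meaning.
sealed interface Token
data class Keyword(val text: String) : Token
data class Identifier(val name: String) : Token
data class IntLiteral(val value: Int) : Token
data class Symbol(val text: String) : Token

val KEYWORDS = setOf("val", "var", "fun")

fun tokenize(source: String): List<Token> =
    source.split(" ").filter { it.isNotBlank() }.map { piece ->
        when {
            piece in KEYWORDS -> Keyword(piece)                 // reserved word
            piece.all { it.isDigit() } -> IntLiteral(piece.toInt())
            piece.all { it.isLetter() } -> Identifier(piece)
            else -> Symbol(piece)                               // =, +, etc.
        }
    }

fun main() {
    println(tokenize("val x = 1 + 2"))
}
```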
1.2 IR: The Universal Highway Between Language and Platform
The output of the frontend is ultimately compiled into Kotlin IR (Intermediate Representation). IR is a programmatic description decoupled from both the specific language syntax and the target platform architecture. It utilizes a unified data structure to express: function invocations, type casts, control flow, object instantiation...
The architectural necessity of IR is profound: All backends share the identical IR. This dictates that an optimization (e.g., inline expansion) only needs to be engineered once at the IR layer, and the JVM, Native, JS, and Wasm backends inherit the performance gains simultaneously. This is the bedrock of Kotlin's Multiplatform strategy.
1.3 The Backend: From IR to Executable Payload
The backend is a platform-exclusive code generator. It ingests the IR and synthesizes the executable payload for the target environment:
- Kotlin/JVM Backend: Translates IR into .class bytecode conforming to the JVM specification.
- Kotlin/Native Backend: Feeds IR into the LLVM toolchain, where LLVM synthesizes native machine code (ARM64, x86_64, etc.).
- Kotlin/JS Backend: Transpiles IR into JavaScript source code.
- Kotlin/Wasm Backend: Compiles IR into the WebAssembly binary format.
II. The K2 Compiler: A Total Architectural Revolution
The most monumental shift in Kotlin 2.0 isn't at the language syntax layer, but buried deep within the compiler. The K2 compiler represents a total rewrite of the frontend architecture, driven by a chronic, compounding technical debt: K1's (the legacy compiler) BindingContext had become an insurmountable bottleneck for both performance and maintainability.
2.1 The Legacy Debt of K1: The Cost of BindingContext
To comprehend why K2 was necessary, we must analyze how K1 operated.
After semantic analysis, the K1 compiler (internal codename FE10) stored all resolved semantic data within a monolithic, global container named BindingContext. Architecturally, visualize this as a colossal "Map of Maps":
BindingContext {
EXPRESSION_TYPE_INFO: Map<KtExpression, ExpressionTypeInfo>
RESOLVED_CALL: Map<Call, ResolvedCall>
REFERENCE_TARGET: Map<KtReferenceExpression, DeclarationDescriptor>
... // Dozens of sub-Maps
}
Whenever the backend required type information for a specific node, it was forced to query this massive Map. The architectural flaws were fatal:
- Performance Bottleneck: Map lookups carry inherent overhead. Worse, K1's resolution was lazy — much of the semantic data was computed only when first queried, leading to repeated recomputation and highly convoluted caching logic.
- Maintenance Nightmare: Decoupling semantic metadata from the physical syntax nodes meant that modifying code easily fractured the synchronization between the two structures.
- Cross-Platform Fragmentation: In the K1 era, the JVM, JS, and Native backends each maintained distinct frontend-to-IR translation logic (Psi2Ir). A single frontend bug frequently required patches across three separate codebases.
2.2 The K2 Solution: FIR—The Semantically Enriched AST
K2 replaces the BindingContext paradigm with FIR (Frontend Intermediate Representation).
FIR is not a standard syntax tree; it is a semantically enriched AST. Type intelligence, resolution outcomes, and overload decisions are embedded directly into the physical tree nodes, completely eliminating external Maps.
Employing a library metaphor: K1 operated a Decoupled Index System—the books (syntax nodes) were on the shelf, but their metadata (semantic info) was isolated in a massive card catalog. Finding data required bouncing between the shelf and the catalog. K2 operates an Embedded Data System—every book has its title, author, and ISBN physically stamped onto its spine. Picking up the book instantly yields all its metadata.
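The metaphor can be made concrete with a deliberately simplified sketch — the class names here are invented for illustration, not the compiler's real types:

```kotlin
// K1-style: syntax node + a global side table of semantic data
class K1Node(val text: String)

class K1BindingContext {
    private val types = mutableMapOf<K1Node, String>()
    fun recordType(node: K1Node, type: String) { types[node] = type }
    fun typeOf(node: K1Node): String? = types[node]   // every query is a map lookup
}

// K2-style: the resolved type lives directly on the node
class FirNode(val text: String, var resolvedType: String? = null)

fun main() {
    val ctx = K1BindingContext()
    val k1 = K1Node("1 + 2")
    ctx.recordType(k1, "Int")
    println(ctx.typeOf(k1))      // read via the side table

    val k2 = FirNode("1 + 2", resolvedType = "Int")
    println(k2.resolvedType)     // read directly off the node
}
```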
The complete K2 compilation pipeline:
Source Code
│
▼ Lexical/Syntax Analysis
PSI (Concrete Syntax Tree)
│
▼ FIR Generation
Raw FIR (Unresolved)
│
▼ FIR Resolution (Type Inference + Name Resolution + Overload Resolution)
Resolved FIR (Semantically Complete)
│
▼ FIR Checkers (Diagnostic Reporting: Warnings + Errors)
│
▼ Fir2Ir Transformation
Kotlin IR (Identical format to K1's Psi2Ir output)
│
▼ Platform Backends
.class / Native Binary / .js / .wasm
Notice the critical architectural pivot: The output of Fir2Ir is identical to the output of K1's Psi2Ir. This dictates that the backend codebases required zero modifications; the monumental performance gains of K2 are isolated entirely within the frontend.
2.3 Quantifying K2's Performance Gains
JetBrains' official benchmark data (measured against massive, production codebases) is staggering:
| Metric | Performance Delta |
|---|---|
| Semantic Analysis Phase | Up to 376% faster |
| Compiler Initialization | Up to 488% faster |
| IDE Code Resolution Latency | Dramatically improved |
For Android engineers, K2's performance leap is transformative. Gradle incremental builds trigger the compiler frontend incessantly; K2's execution velocity yields massive build time reductions in enterprise Android projects.
III. Syntactic Sugar Deconstruction: Kotlin Through the Bytecode Lens
Equipped with compiler architectural knowledge, we can execute the most pragmatic skill: Deconstructing Syntactic Sugar. Every ergonomic feature in Kotlin maps to a highly specific JVM instruction sequence. The ability to mentally execute this "reverse-compilation" is the hallmark of a senior Kotlin engineer.
Within IntelliJ IDEA or Android Studio, navigate to Tools → Kotlin → Show Kotlin Bytecode, then click Decompile to generate equivalent Java code. This is the ultimate tool for verifying the physical reality of any Kotlin feature.
3.1 data class: The Boilerplate the Compiler Writes For You
The data class is the apex of Kotlin syntactic sugar:
data class Point(val x: Int, val y: Int)
Pushing this through the Kotlin Bytecode decompiler reveals the raw reality:
// Compiler-Synthesized (Java Equivalent)
public final class Point {
private final int x;
private final int y;
public Point(int x, int y) { this.x = x; this.y = y; }
public final int getX() { return x; }
public final int getY() { return y; }
// equals: Scans ONLY primary constructor parameters
@Override
public boolean equals(Object other) {
if (this == other) return true;
if (!(other instanceof Point)) return false;
Point o = (Point) other;
return x == o.x && y == o.y;
}
// hashCode: Hashes ONLY primary constructor parameters
@Override
public int hashCode() {
return 31 * x + y;
}
// toString: Essential for state debugging
@Override
public String toString() {
return "Point(x=" + x + ", y=" + y + ")";
}
// copy: Shallow copy; default-argument support is provided by a synthetic copy$default bridge
public final Point copy(int x, int y) {
return new Point(x, y);
}
// componentN: Synthesizes the architecture for destructuring declarations
public final int component1() { return x; }
public final int component2() { return y; }
}
The compiler synthesizes the full member set automatically: equals, hashCode, toString, copy, and one componentN function per primary constructor parameter (plus the property getters). Crucially, only primary constructor parameters participate in the generation of equals/hashCode/toString/copy/componentN. Properties defined within the class body are ignored entirely — this constraint is a frequent source of subtle bugs for junior engineers.
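A quick experiment confirms the constraint — a property declared in the class body (here the hypothetical label) is invisible to the generated members:

```kotlin
data class Point(val x: Int, val y: Int) {
    var label: String = ""   // body property: excluded from equals/hashCode/toString/copy
}

fun main() {
    val a = Point(1, 2).apply { label = "origin" }
    val b = Point(1, 2).apply { label = "elsewhere" }
    println(a == b)        // true: labels differ, but equals only compares x and y
    println(a)             // Point(x=1, y=2) — label is absent from toString
    val (x, y) = a         // destructuring via component1/component2
    println(x + y)         // 3
}
```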
3.2 object Singletons: Language-Level Double-Checked Locking
object Database {
fun connect() { println("connecting...") }
}
Decompiled:
// Database.class
public final class Database {
// Singleton payload: static final, initialized thread-safely by the JVM ClassLoader
public static final Database INSTANCE;
static {
// Instantiated during the <clinit> (class initialization) phase
INSTANCE = new Database();
}
// Private constructor blocking external instantiation
private Database() {}
public final void connect() {
System.out.println("connecting...");
}
}
// Calling Site
Database.INSTANCE.connect();
Kotlin's object directly exploits the native thread-safety guarantees of the JVM class loading mechanism. The <clinit> block is guaranteed by the JVM specification to execute exactly once, when the class is first initialized, completely eliminating the need for explicit synchronization monitors. This architecture is both more concise and safer than manual Double-Checked Locking (DCL) implementations.
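A small sketch (the Config object and its members are invented for illustration) makes the single-initialization guarantee observable:

```kotlin
object Config {
    var initCount = 0
        private set

    init {
        initCount++   // runs inside <clinit>, exactly once per classloader
    }

    fun get(key: String) = "value-for-$key"
}

fun main() {
    Config.get("a")
    Config.get("b")
    println(Config === Config)   // true: a single INSTANCE
    println(Config.initCount)    // 1: the init block ran only once
}
```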
3.3 companion object: Static Inner Classes + Proxy Routing
class MyClass {
companion object {
const val TAG = "MyClass"
fun newInstance(): MyClass = MyClass()
}
}
Decompiled:
// MyClass.class
public final class MyClass {
// Static final instance of the Companion
public static final MyClass.Companion Companion = new MyClass.Companion();
// const val is hoisted to a static final field on the host class (compile-time constant)
public static final String TAG = "MyClass";
// Static nested class Companion
public static final class Companion {
// Private constructor, access restricted to MyClass.Companion
private Companion() {}
public final MyClass newInstance() {
return new MyClass();
}
}
}
Invoking MyClass.newInstance() is physically identical to invoking MyClass.Companion.newInstance(). If you inject the @JvmStatic annotation, the compiler synthesizes a static proxy method directly onto MyClass, allowing Java to call MyClass.newInstance() natively—a mechanic explored heavily in the "Kotlin/Java Interoperability" article.
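Both mechanisms can be sketched together — Logger and its members are invented names for illustration:

```kotlin
class Logger private constructor(val tag: String) {
    companion object {
        const val DEFAULT_TAG = "App"   // hoisted to a static final field on Logger

        @JvmStatic   // additionally emits a static Logger.create(...) entry point for Java
        fun create(tag: String = DEFAULT_TAG): Logger = Logger(tag)
    }
}

fun main() {
    println(Logger.create().tag)        // App
    println(Logger.create("Net").tag)   // Net
}
```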
3.4 Extension Functions: The Static Method Camouflage
fun String.isPalindrome(): Boolean {
return this == this.reversed()
}
Decompiled:
// StringExtensionsKt.class (Classname derived from filename)
public final class StringExtensionsKt {
// The receiver object is passed as the first parameter
public static final boolean isPalindrome(String $this$isPalindrome) {
return $this$isPalindrome.equals(
new StringBuilder($this$isPalindrome).reverse().toString()
);
}
}
At the bytecode layer, extension functions are plain static methods. There is no runtime modification of the class whatsoever. The invocation str.isPalindrome() is compiled directly into StringExtensionsKt.isPalindrome(str). This architecture is why extension functions cannot override member functions — they are resolved statically against the declared type — and cannot access private class members: they reside entirely outside the class scope.
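Static resolution has a visible consequence: an extension is chosen by the receiver's declared (compile-time) type, never its runtime type. A minimal demonstration (the types are invented):

```kotlin
open class Base
class Derived : Base()

fun Base.describe() = "Base extension"
fun Derived.describe() = "Derived extension"

fun main() {
    val obj: Base = Derived()
    // Compiles to a static call taking the receiver as a parameter,
    // so overload selection happens at compile time against type Base.
    println(obj.describe())         // Base extension
    println(Derived().describe())   // Derived extension
}
```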
3.5 Lambdas: The Anonymous Class Reality
val square: (Int) -> Int = { x -> x * x }
In the absence of the inline modifier, the decompilation reveals:
// Compiler synthesizes an anonymous class for the Lambda
static final class Closure$square extends Lambda implements Function1<Integer, Integer> {
// Singleton instance (reusable if no external variables are captured)
static final Closure$square INSTANCE = new Closure$square();
@Override
public Integer invoke(Integer x) {
return x * x;
}
}
// Calling Site
Function1<Integer, Integer> square = Closure$square.INSTANCE;
Critical architectural detail: Lambdas that do not capture external variables are compiled as singletons (reusing the INSTANCE field). However, lambdas that do capture external context instantiate a brand new object on the heap for every single invocation. This is the physical origin of the "memory penalty" associated with closures.
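The capture distinction is directly observable (function names are invented). Two calls to a capturing factory must yield distinct closure objects, because each instance holds its own captured state:

```kotlin
val constant: () -> Int = { 42 }   // captures nothing: eligible for INSTANCE reuse

fun makeCapturing(step: Int): () -> Int = { step }   // captures `step`: allocates per call

fun main() {
    println(constant())             // 42
    val f1 = makeCapturing(1)
    val f2 = makeCapturing(2)
    println(f1 === f2)              // false: two separate heap objects
    println(f1() + f2())            // 3
}
```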
When a function is declared inline, the lambda's bytecode is copied directly into the call site and the anonymous class is never generated. This is why inline collection operators such as filter and map carry no per-call lambda allocation overhead.
3.6 inline Functions: Bytecode "Cut-and-Paste"
inline fun measureTime(block: () -> Unit): Long {
val start = System.nanoTime()
block()
return System.nanoTime() - start
}
// Calling Site
val elapsed = measureTime { Thread.sleep(100) }
Post-inlining, the calling site's bytecode evaluates to:
// Compiler unrolls the function directly at the call site.
// Zero function call overhead, zero Lambda object allocation.
long start = System.nanoTime();
Thread.sleep(100); // ← The payload of 'block' is injected directly here
long elapsed = System.nanoTime() - start;
Inline expansion eliminates two costs at once: the stack frame overhead of the function call and the heap allocation of the Lambda object. For higher-order functions executing on critical hot paths, this is a significant optimization.
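Combining the two fragments above into a runnable whole (the standard library ships an equivalent helper, kotlin.system.measureNanoTime, with the same shape):

```kotlin
inline fun measureTime(block: () -> Unit): Long {
    val start = System.nanoTime()
    block()                          // inlined: no Function0 object is allocated
    return System.nanoTime() - start
}

fun main() {
    val elapsed = measureTime { Thread.sleep(100) }
    println("slept ~${elapsed / 1_000_000} ms")
}
```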
3.7 when Expressions: The Compiler's "Smart Routing"
when is Kotlin's apex control flow structure. The compiler autonomously selects the optimal bytecode routing strategy based on the specific branch topology:
Strategy 1: Contiguous Integer Branches → tableswitch (O(1))
val desc = when (code) {
1 -> "One"
2 -> "Two"
3 -> "Three"
else -> "Other"
}
Compiles to the tableswitch instruction—a dense, contiguous jump address table. It utilizes the value of code as a direct array index to jump instantly, achieving O(1) time complexity.
tableswitch {
1: Label_1 // goto "One"
2: Label_2 // goto "Two"
3: Label_3 // goto "Three"
default: Label_default
}
Strategy 2: Sparse Integer Branches → lookupswitch (O(log n))
val desc = when (code) {
1 -> "Small"
100 -> "Medium"
9999 -> "Large"
else -> "Other"
}
The massive delta between values (1 to 9999) would create catastrophic memory bloat in a jump table. The compiler pivots to lookupswitch—maintaining an internally sorted key-value routing table and executing a binary search, yielding O(log n) time complexity.
Strategy 3: String Branches → hashCode + switch + equals
val result = when (name) {
"Alice" -> 1
"Bob" -> 2
else -> 0
}
The JVM bytecode specification does not support native switch routing on Strings. The compiler engineers a two-phase resolution architecture:
// Java Equivalent (Compiler Synthesized)
int result;
int h = name.hashCode(); // Phase 1: Compute Hash
switch (h) {
    case 63350368: // "Alice".hashCode()
        if (name.equals("Alice")) { result = 1; break; }
        // Hash collision with a different string: fall back to the else branch
        result = 0; break;
    case 66965: // "Bob".hashCode()
        if (name.equals("Bob")) { result = 2; break; }
        result = 0; break;
    default:
        result = 0;
}
Two phases: hashCode provides the fast switch dispatch, then equals performs the exact string comparison that guards against hash collisions (preventing misrouting when two different strings share a hash).
Strategy 4: Type Evaluation Branches → instanceof + if-else chain
fun describe(obj: Any) = when (obj) {
is Int -> "Integer"
is String -> "String"
else -> "Other"
}
Type checks cannot be encoded as a switch instruction; the compiler emits a linear instanceof chain instead:
String result;
if (obj instanceof Integer) {
result = "Integer";
} else if (obj instanceof String) {
result = "String";
} else {
result = "Other";
}
IV. Multiplatform Targets: One Codebase, Four Destinies
Kotlin's Multiplatform architecture is predicated on one brilliant design decision: All backends consume the exact same IR. This allows the language to evolve uniformly across radically different execution environments.
4.1 Kotlin/JVM: The Most Mature Ecosystem
The JVM backend is the oldest and most heavily fortified. Kotlin IR is transpiled directly into .class bytecode files compliant with the JVM spec.
In Android engineering, these .class files are subsequently ingested by D8/R8 (Google's Dex compilers) and re-compiled into Dalvik bytecode (.dex) to execute on the ART runtime. Thus, Android code physically undergoes two distinct compilation phases: kotlinc → .class → D8/R8 → .dex.
4.2 Kotlin/Native: The Anti-VM
Kotlin/Native targets environments (iOS, embedded systems, desktop) that either prohibit or lack runtime virtual machines. Kotlin IR is offloaded directly to the LLVM toolchain:
Kotlin IR
↓ Kotlin/Native Compiler
LLVM IR
↓ LLVM Optimization + Target Code Generation
Native Machine Code (ARM64, x86_64, etc.)
LLVM is an industrial-grade compiler infrastructure responsible for low-level optimizations like register allocation and instruction selection for specific CPU architectures. The final payload is a standalone native binary with no virtual machine. Memory management is still required, however: Kotlin/Native ships its own runtime. Its original memory manager used reference counting coupled with a cycle collector; the current default memory manager uses a tracing garbage collector.
Cross-module shared code is packaged into .klib files—essentially serialized archives of Kotlin IR, functioning as the Native ecosystem's equivalent of a .jar.
4.3 Kotlin/JS and Kotlin/Wasm: The Dual Web Vectors
| Architectural Feature | Kotlin/JS | Kotlin/Wasm |
|---|---|---|
| Output Format | JavaScript Source Code | WebAssembly Bytecode |
| Execution Engine | JS Engine (V8, SpiderMonkey) | Wasm Runtime (Requires WasmGC) |
| JS Interop | Seamless & Native | Requires JS binding layers |
| Performance Ceiling | Bottlenecked by JS Engine | Near-native (Structured memory) |
| Maturity | Stable | Rapidly Evolving |
Kotlin/Wasm deploys the WasmGC (WebAssembly Garbage Collection) proposal, allowing Wasm modules to operate within structured managed memory, bypassing the catastrophic complexity of manually managing object lifecycles within Wasm's linear memory model.
V. KSP vs KAPT: The Two Eras of Annotation Processing
Annotation processors dominate Android architecture: Room, Hilt, Glide, Moshi... nearly every foundational framework relies on annotation processing to synthesize boilerplate code. KAPT and KSP represent two vastly different eras of compiler interaction.
5.1 KAPT: A Bridge Engineered for the Java World
KAPT (Kotlin Annotation Processing Tool) was born as an architectural compromise. The Java ecosystem possessed massive, highly optimized annotation processors (built against the javax.annotation.processing.Processor interface), but these processors were completely blind to Kotlin code.
KAPT's brute-force solution: Transpile Kotlin code into Java Stubs prior to compilation.
Source (.kt)
↓ KAPT Pre-processing (Catastrophically Expensive)
Java Stub (.java) ← Hollowed out Java skeletons; structure retained, implementations purged
↓ javac + Annotation Processors
Synthesized Java Code
↓ kotlinc
Final Bytecode
What does a Java Stub look like? Assume this Kotlin class:
data class User(val name: String, val age: Int)
KAPT synthesizes this Stub for the Java processors to analyze:
// Auto-synthesized Java Stub (Structure intact, logic purged)
public final class User {
@NotNull private final String name;
private final int age;
public User(@NotNull String name, int age) { /* stub */ }
@NotNull public final String getName() { /* stub */ return null; }
public final int getAge() { /* stub */ return 0; }
// equals/hashCode/toString/copy method signatures...
}
Stub generation is KAPT's fatal performance flaw. It must execute before any annotation processing begins, requiring a complete semantic analysis of all Kotlin source files. Worse, this phase is highly prone to "full cache invalidation"—a minor modification often triggers a total regeneration of all Stubs.
5.2 KSP: Native Kotlin Comprehension
KSP (Kotlin Symbol Processing), engineered by Google, operates as a native compiler plugin API. It entirely bypasses Stub generation, operating directly against the compiler's resolved Symbols—the post-semantic analysis Kotlin AST nodes.
Kotlin Compilation (Frontend + FIR Analysis)
↓ Post-FIR analysis, exposes the API
KSP Processors ← directly ingest Kotlin Symbols (Semantic data of classes, funcs, params)
↓
Synthesized Code (.kt or .java)
↓
Re-inject into Main Compilation Pipeline
KSP possesses native, first-class comprehension of Kotlin features:
- The suspend modifier: KSP reads it natively; KAPT cannot see it within a Java Stub.
- Nullability Annotations: KSP reads nullability directly from the Kotlin Type System.
- data class generated members: KSP natively recognizes the synthetic copy and componentN functions.
5.3 Performance Metrics: Quantifying the Delta
| Dimension | KAPT | KSP |
|---|---|---|
| Execution Architecture | Synthesize Java Stub → Java Processor | Direct consumption of Kotlin Symbols |
| Intermediate Payloads | Massive volume of Java Stub files | Zero |
| Build Velocity | Slow (Stub generation bottlenecks) | Blistering (Official data: Typically 2×+ faster) |
| Incremental Builds | Highly restricted (Stub-level granularity) | Hyper-granular (Symbol-level tracking) |
| Kotlin Feature Support | Crippled (Mapped via Java Stubs) | Absolute (Native Kotlin Semantics) |
| Maintenance Trajectory | ⚠️ Maintenance Mode (Feature freeze) | ✅ Active Development |
KAPT has officially entered maintenance mode. Both Google and JetBrains mandate the migration of annotation processors to KSP. Standard frameworks like Room, Hilt, and Moshi already supply native KSP implementations.
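In Gradle terms, the migration is typically a plugin and configuration swap. A hedged sketch in build.gradle.kts — the plugin version and artifact coordinates below are placeholders, not pinned recommendations:

```kotlin
// build.gradle.kts — illustrative kapt → KSP migration; versions are placeholders
plugins {
    kotlin("jvm")
    // id("org.jetbrains.kotlin.kapt")                       // before: KAPT
    id("com.google.devtools.ksp") version "<ksp-version>"    // after: KSP
}

dependencies {
    implementation("androidx.room:room-runtime:<room-version>")
    // kapt("androidx.room:room-compiler:<room-version>")    // before
    ksp("androidx.room:room-compiler:<room-version>")        // after
}
```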
VI. IntelliJ / Android Studio: A Bytecode Tooling Guide
With the theoretical foundations established, the critical next step is integrating "Bytecode Deconstruction" into your daily debugging workflow.
6.1 Execution Path
Open any .kt file and navigate via the menu:
Tools → Kotlin → Show Kotlin Bytecode
The bytecode panel will render the raw JVM bytecode synthesized from the current file (in text format, utilizing ASM's Textifier output).
Click the Decompile button within the panel. The IDE will trigger the FernFlower decompiler, reversing the bytecode back into highly readable, equivalent Java code.
This is the ultimate source of truth. When confronted with uncertainty regarding a Kotlin feature's underlying mechanics, write three lines of code, open the bytecode pane, and observe reality. It is infinitely faster and more accurate than consulting documentation.
6.2 Verifying Architectural Unknowns
Unknown 1: How does the bytecode distinguish val and var?
// Syntax A
val a = 1
// Syntax B
var b = 1
Decompiled: for properties (top-level or class members), the backing field of a val carries the final modifier and no setter is generated; a var gets a non-final field plus a setter. For local variables, the emitted bytecode is identical in both cases — local immutability is enforced purely by the compiler frontend.
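For class properties the field-level claim is directly verifiable with reflection (Holder is an invented example):

```kotlin
import java.lang.reflect.Modifier

class Holder {
    val a = 1   // backing field emitted as: private final int a
    var b = 2   // backing field emitted as: private int b, plus a setter
}

fun main() {
    val fieldA = Holder::class.java.getDeclaredField("a")
    val fieldB = Holder::class.java.getDeclaredField("b")
    println(Modifier.isFinal(fieldA.modifiers))   // true
    println(Modifier.isFinal(fieldB.modifiers))   // false
}
```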
Unknown 2: How is string interpolation executed?
val msg = "Hello, $name! You are $age years old."
Decompiled equivalent:
String msg = "Hello, " + name + "! You are " + age + " years old.";
// The compiler may optimize this into a StringBuilder pipeline:
String msg = new StringBuilder("Hello, ")
.append(name)
.append("! You are ")
.append(age)
.append(" years old.")
.toString();
Unknown 3: How does the ?. safe call physically execute?
val length = str?.length
Java Equivalent:
Integer length = (str != null) ? str.length() : null;
Null safety is purely a compile-time illusion achieved by injecting standard null checks into the bytecode. Zero runtime reflection or custom wrapper frameworks are utilized.
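A minimal demonstration, including the closely related Elvis operator, which compiles to the same null check plus a default branch:

```kotlin
fun lengthOf(s: String?): Int? = s?.length   // compiles to: (s != null) ? s.length() : null

fun main() {
    println(lengthOf("Kotlin"))      // 6
    println(lengthOf(null))          // null
    val len = lengthOf(null) ?: 0    // Elvis: same check, with a fallback value
    println(len)                     // 0
}
```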
VII. Synthesis: The Ultimate Implication
Reviewing the architectures analyzed across this article, a singular, uncompromising design philosophy emerges:
Kotlin's prime directive is to deliver massive language-level ergonomics with zero (or near-zero) runtime cost.
- The ergonomics of data class are powered by compile-time synthesis; zero runtime cost.
- The inline modifier eliminates the object allocations demanded by higher-order functions.
- The object singleton leverages the JVM classloader to guarantee thread safety; zero runtime synchronization locks.
- The ?. safe call compiles into raw null checks; zero reflection overhead.
This trajectory is the exact opposite of Java's historical reliance on heavy runtime frameworks (AOP, Reflection, Dynamic Proxies) to achieve convenience. Kotlin's architecture dictates: Offload the complexity to the compiler, hand the conciseness to the engineer, and reserve maximum performance for the runtime engine.
The K2 compiler is the ultimate manifestation of this philosophy: Blistering compilation speeds, surgical type analysis, and absolute cross-platform behavioral parity. These enhancements require zero API changes and are invisible to the developer, yet they permanently elevate the ceiling of the entire engineering toolchain.
Understanding the compiler is not about manually writing bytecode. It is about achieving absolute clarity: When you write a line of Kotlin, you must know exactly what you are commanding the machine to do—and critically, what you are not doing.