Kotlin 2.3 RC Adds Unused-Value Checks and Better Interop

Anand Naidu has spent years shipping Kotlin across the JVM, Native, JS, and now Wasm, often acting as the connective tissue between platform constraints and developer experience. In this conversation, he unpacks what Kotlin 2.3.0-RC changes in day-to-day engineering: catching dropped results with a new checker, stabilizing language constructs that unlock cleaner code, sharpening context-sensitive resolution, aligning Swift interop with native expectations, targeting Java 25, and smoothing cross-platform builds from Gradle to Wasm. We explore how these updates translate into fewer defects, simpler interop, clearer APIs, and safer refactors—complete with migration playbooks, diagnostics, and the small gotchas that only show up in real projects.

Kotlin 2.3.0-RC adds an experimental checker for unused return values. Where did you see dropped results cause real bugs, and how would you set up a project to surface them? Walk through a concrete example, false-positive patterns, and metrics you’d track after enabling it.

The most painful bugs I’ve seen from dropped results came from validation and caching. A function would return a sanitized or cached instance, but callers would keep using the original object, quietly bypassing invariants. With the new checker, I flip it on at the module level and run it in CI with “warnings as errors” only on canary branches. A concrete case:

    val normalized = userInput.normalize() // returned String was discarded before
    process(normalized)

We previously had:

    userInput.normalize()
    process(userInput) // subtle bug because normalize()'s result wasn't used

The checker calls that out because the function returns something other than Unit or Nothing and its value isn’t consumed. False positives usually show up in builder-like APIs where methods mutate in place but also return the receiver for chaining. I annotate these with documentation and, if needed, adjust them to return Unit to signal “fire-and-forget.” I track three things after enabling it: the count of new warnings per module, time-to-fix (from detection to commit), and the number of test failures linked to these fixes. Even without hard numbers, the pattern is consistent: we find issues quickly on code paths that looked fine in reviews but were logically off by one call.
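
Wiring it up is a small per-module change; here’s a minimal build.gradle.kts sketch, assuming the RC exposes the checker through the -Xreturn-value-checker compiler flag (double-check the exact spelling and values against the release notes for your build):

    kotlin {
        compilerOptions {
            // Experimental unused-return-value checker; flag name assumed from the
            // release notes, so verify it for your RC build.
            freeCompilerArgs.add("-Xreturn-value-checker=check")
            // We only turn warnings into errors on canary branches.
            allWarningsAsErrors.set(true)
        }
    }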

Two features moved from beta to stable: nested type aliases and data flow exhaustiveness for when expressions. How did these change your code structure, and what refactors did they replace? Share a before/after snippet, migration steps, and any performance or readability metrics you observed.

Nested type aliases let us group domain-centric type names close to the owning type, which keeps the public surface tidy. Before, we parked aliases in a top-level Types.kt and hunted imports. After stabilization, we colocate them.

Before:

    typealias UserId = String
    class UserService { /* … */ }

After:

    class User {
        typealias Id = String
    }
    class UserService {
        fun load(id: User.Id) { /* … */ }
    }

For data flow exhaustiveness on when, we dropped redundant else branches where smart casts already covered all paths.

Before:

    when (state) {
        is Loading -> showSpinner()
        is Loaded -> showData(state.data)
        else -> log("ignored")
    }

After:

    when (state) {
        is Loading -> showSpinner()
        is Loaded -> showData(state.data)
    }

Migration steps were simple: move aliases into their owners, fix imports, and remove unnecessary else branches where the compiler confirms exhaustiveness. Readability improved because intent lives with the type; reviewers stopped bouncing between files. Build times and runtime performance looked unchanged in our pipelines; the win is structural clarity.
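
For completeness, the when snippets above assume a small sealed hierarchy along these lines (names are illustrative, not from a real codebase):

    sealed interface UiState
    data object Loading : UiState
    data class Loaded(val data: String) : UiState

    fun showSpinner() = println("loading…")
    fun showData(data: String) = println(data)

    // No else branch needed: the compiler proves the when is exhaustive.
    fun render(state: UiState) = when (state) {
        is Loading -> showSpinner()
        is Loaded -> showData(state.data)
    }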

Context-sensitive resolution now considers sealed and enclosing supertypes and warns on ambiguity with type operators and equalities. Can you describe a case where this helped, and another where it complicated overload resolution? Show your diagnostic steps, the warning text, and how you resolved ambiguity.

It helped in a sealed hierarchy where helper extensions lived on the sealed parent. Previously, the IDE sometimes missed the right extension without explicit imports. With sealed supertypes in scope, calling analyze() on members of a sealed family just worked—fewer hints, cleaner code. The complication came when we had two extensions with the same name: one on the sealed parent, one on an enclosing supertype. A call like if (x is SomeType && x.hasFeature()) triggered a warning along the lines of: “Ambiguous call due to context-sensitive resolution.” My diagnostics: reproduce with minimal code, run the compiler to see the exact site, then add explicit receiver qualification or a type ascription:

    (parent as SealedParent).hasFeature()

or

    val y: SealedParent = x
    y.hasFeature()

We also renamed one extension to disambiguate, which made the warning disappear without losing expressiveness.
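
To make the rename fix concrete, here is a hedged sketch with hypothetical names: the general helper stays on the enclosing supertype, and the sealed-family helper gets a more specific name so no call site depends on which receiver wins.

    interface Feature
    sealed interface SealedParent : Feature {
        val enabled: Boolean
    }
    data class Child(override val enabled: Boolean) : SealedParent

    // Originally both of these were called hasFeature(), which is what made
    // call sites ambiguous once the sealed parent came into scope.
    fun Feature.hasFeature(): Boolean = false
    fun SealedParent.hasSealedFeature(): Boolean = enabled

    fun check(x: Feature): Boolean =
        x is SealedParent && x.hasSealedFeature()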

Kotlin/Native’s Swift export now maps Kotlin enums to native Swift enums and supports variadic parameters. How did this affect API design and call sites in Swift? Share a small interop example, migration gotchas, and any ABIs or binary-size changes you measured.

Mapping Kotlin enums to true Swift enums is a big ergonomics win. On the Swift side, you get switch exhaustiveness, pattern matching, and code completion that feels native. For example:

Kotlin:

    enum class Status { Loading, Loaded, Error }
    fun render(status: Status) { /* … */ }
    @ExportForSwift
    fun logTags(vararg tags: String) { /* … */ }

Swift:

    render(status: .loading)
    logTags("ui", "list", "coldStart")

Migration gotcha: if you previously relied on the class-like behavior of exported enums (methods or stored-like semantics), move that logic into functions or companion-style helpers, since Swift enums are value-like. As for ABI or binary size, the RC notes don’t cite growth, and in my checks I didn’t observe a size change attributable to enums or varargs. The more impactful change is Swift source clarity and fewer adapter shims.
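
That gotcha is mostly about where behavior lives; a rough sketch with hypothetical names:

    enum class Status { Loading, Loaded, Error }

    // Was a member function inside the enum body; moving it to a standalone helper
    // keeps the exported Swift enum a plain, value-like list of cases.
    fun isTerminal(status: Status): Boolean =
        status == Status.Loaded || status == Status.Error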

Kotlin 2.3.0-RC supports generating Java 25 bytecode. What Java 25 features or JDK behaviors did you leverage, and how did they influence performance or startup? Outline your build setup, verification steps (javap, runtime checks), and rollout safeguards for mixed JDK environments.

Targeting Java 25 lets us align with the latest standard runtime. In build.gradle.kts, we set Kotlin’s jvmTarget to match Java 25 bytecode and ensure the toolchain uses that JDK. Verification-wise, I run javap -v on a representative class to confirm the major version aligns with Java 25, and I sanity-check startup paths under that JDK. For mixed environments, we publish a variant compiled to Java 25 and keep a compatibility build for older deployments, gating selection via Gradle variants and CI matrix jobs. Rollout safeguards include running the app under multiple JDKs in CI, watching for deprecation messages during startup, and holding back production on a canary ring until logs and metrics are clean.
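
The build setup itself is small; a sketch of the build.gradle.kts pieces, assuming the RC’s Gradle DSL exposes a JVM_25 target constant (verify the name against your plugin version):

    import org.jetbrains.kotlin.gradle.dsl.JvmTarget

    kotlin {
        jvmToolchain(25) // compile and test on a JDK 25 toolchain
        compilerOptions {
            jvmTarget.set(JvmTarget.JVM_25) // emit Java 25 class files
        }
    }

For the javap check, Java 25 class files report major version 69, so any class landing below that means a module slipped through with an older target.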

The RC is compatible with Gradle 7.6.3 through 9.0.0, but newer versions may show deprecation warnings. How do you decide which Gradle version to pin, and how do you handle those warnings? Share your build scan process, performance benchmarks, and upgrade playbook.

I pin to the newest version inside the stated compatibility window that yields clean builds in our scan. The process: run a Build Scan, catalog deprecation warnings, and verify plugins don’t rely on features that “may not work.” If warnings appear, I either update the plugins or add suppression only with a ticket to remove it. We benchmark configuration time and incremental build time across a small and a large module. The upgrade playbook is straightforward: update wrapper, run scans, fix warnings, validate caching and test determinism, then roll out via a staged pipeline.
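
The “update wrapper” step is the only code change; I pin it in the root build.gradle.kts so the bump shows up in review (9.0.0 here purely as the top of the stated window):

    tasks.wrapper {
        // Newest Gradle inside the documented 7.6.3–9.0.0 compatibility window
        // that comes back clean in our Build Scan.
        gradleVersion = "9.0.0"
    }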

For Kotlin/Wasm, KClass.qualifiedName is on by default with no binary size increase thanks to compiler optimizations. Where does runtime FQN help you most, and how did you verify no size regression? Walk through measurement steps, tooling, and a concrete reflection-based use case.

Runtime FQNs are gold for logging, analytics, and feature flagging. In Wasm, we tag events by KClass.qualifiedName to route behavior without shipping a custom registry. To verify no size regression, I built before and after with the same optimization settings and compared the Wasm artifact sizes. The RC notes credit compiler optimizations for keeping size flat, and in my checks the binaries matched within normal variance. A practical use case: a plugin loader keyed by FQN for strategy selection—cleaner than hard-coded enums and safer than stringly-typed tags.
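
Here’s roughly what that loader looks like, with hypothetical names; the only thing it leans on is KClass.qualifiedName being available at runtime on Wasm:

    interface ExportStrategy {
        fun export(payload: String): String
    }

    class JsonExport : ExportStrategy {
        override fun export(payload: String) = """{"payload":"$payload"}"""
    }

    object StrategyRegistry {
        private val strategies = mutableMapOf<String, ExportStrategy>()

        fun register(strategy: ExportStrategy) {
            // Keyed by the runtime FQN instead of a hand-maintained enum or string tag.
            val key = strategy::class.qualifiedName ?: error("strategies must be named classes")
            strategies[key] = strategy
        }

        fun resolve(fqn: String): ExportStrategy? = strategies[fqn]
    }

    fun main() {
        StrategyRegistry.register(JsonExport())
        // In real code the key arrives from config or an analytics event.
        println(StrategyRegistry.resolve(JsonExport::class.qualifiedName!!)?.export("hello"))
    }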

The wasmWasi target now enables the new WebAssembly exception handling proposal by default. How did this change your error-handling model compared to prior workarounds? Provide a code path you simplified, your testing strategy across runtimes, and any performance observations.

Before this, we leaned on result wrappers or sentinel values to avoid unwinding costs. With exceptions enabled by default on wasmWasi, we returned to idiomatic try/catch and deleted adapter code. A simplified path:

Before:

    val r = compute().takeIf { it.ok } ?: return Error("fail")

After:

    try { compute(); onSuccess() } catch (e: Throwable) { onError(e) }

Testing strategy: run the same test suite in multiple Wasm runtimes to ensure consistent propagation and stack traces. Performance-wise, I didn’t see regressions; the bigger win was deleting branches that previously obscured the happy path.
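
A slightly fuller picture of the adapter code we deleted, with hypothetical names:

    // The wrapper existed only because we were avoiding exception unwinding on Wasm.
    data class Outcome(val ok: Boolean, val value: Int = 0)

    fun compute(): Int = 42 // stand-in for real work that may throw

    fun computeWrapped(): Outcome =
        try { Outcome(ok = true, value = compute()) } catch (e: Throwable) { Outcome(ok = false) }

    // With the exception-handling proposal on by default for wasmWasi, call sites
    // can go straight back to try/catch and drop the wrapper entirely.
    fun run(onSuccess: (Int) -> Unit, onError: (Throwable) -> Unit) =
        try { onSuccess(compute()) } catch (e: Throwable) { onError(e) }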

Kotlin/JS can now export suspend functions directly via @JsExport. How did this remove boilerplate in your JS/TS interop, and what calling patterns changed? Show a before/after call from TypeScript, your error propagation approach, and how you typed the returned Promise.

This change eliminates the “bridge” layer that manually wrapped suspend calls into Promises. Now the compiler does it for us. TypeScript calls go from a custom wrapper to a first-class Promise:

Before (TS):

    await bridge.fetchData()

After (TS):

    await fetchData() // exported suspend

Errors translate to rejected Promises, so we rely on try/catch with async/await or .catch. In the generated d.ts, the signature surfaces as a Promise-returning function, which means our TS code reads and type-checks cleanly without handwritten typings. It’s less glue, fewer files, and easier reviews.
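
On the Kotlin side, the export is just the annotation on the suspend function itself; a minimal sketch (fetchData is a hypothetical name, and depending on your Kotlin version the annotation may still sit behind the ExperimentalJsExport opt-in):

    @OptIn(ExperimentalJsExport::class)
    @JsExport
    suspend fun fetchData(): String {
        // Stand-in for real async work; surfaces as a Promise-returning function in the d.ts.
        return "payload"
    }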

The RC landed on November 18, with stable expected next month or January 2026, and builds are on GitHub. How do you evaluate RC stability before adopting? Outline your canary strategy, test matrix (JDKs, Gradle versions, targets), and rollback criteria with real examples.

I treat RCs as opt-in previews behind canaries. We wire nightly builds to pull the RC from GitHub, then run the full test matrix: JDKs including the newest, Gradle versions within the compatibility band (7.6.3 through 9.0.0), and targets spanning JVM, Native, JS, and Wasm. If anything flakes, we log it and keep the RC confined to pre-release branches. Rollback is simply flipping the toolchain version in the build logic and re-running the matrix. We only promote to main when CI is green across the board and canary deploys finish without deprecation or runtime warnings.
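
Concretely, “flipping the toolchain version” is one line where the Kotlin plugin is declared; swap in whichever Kotlin plugin your project applies:

    plugins {
        // Canary branches point here; rollback is changing this string back to the
        // previous stable release and re-running the matrix.
        kotlin("multiplatform") version "2.3.0-RC"
    }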

Context-sensitive resolution can now make some cases ambiguous and the compiler warns accordingly. How do you refactor to avoid ambiguity without losing expressiveness? Share a step-by-step resolution plan, examples of better type hints, and any lints you enforce.

My plan is incremental: reproduce the warning, isolate the smallest snippet, then add intent. That might mean specifying a receiver type, adding a local val with an explicit type, or inlining a generic argument. For example:

    val s: SealedParent = x
    s.compute()

or

    val result = compute(input)

We also lint for ambiguous calls and flag extension overloads on both a sealed parent and its supertypes. If the overloading is accidental, we consolidate into one well-named function and delete the duplicate to avoid future drift.

With improved Swift interop and @JsExport for suspend, how do you keep API parity across Swift, JS/TS, and JVM while staying idiomatic on each platform? Describe your shared contract design, platform-specific shims, versioning scheme, and lessons from real consumer feedback.

We define a minimal, stable contract in common code and let each platform present idiomatic sugar. For Swift, enums are now native and variadics map naturally; for JS/TS, suspend functions become Promises; on JVM, we keep standard Kotlin coroutines. Platform-specific shims adapt naming and error models to local expectations without changing the shared semantics. We version the contract conservatively and surface platform notes in the release text so consumers know what changed. Feedback has been positive: fewer adapter utilities in app code and more confidence that the same behavior shows up across all targets.
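
As a rough sketch of what that shared contract looks like in common code (hypothetical names; the platform-specific sugar lives outside this interface):

    // commonMain: the minimal, stable contract every platform implements against.
    data class SessionToken(val value: String)

    interface SessionApi {
        suspend fun start(userId: String): SessionToken
        fun end(token: SessionToken)
    }

Each platform then layers its own idiom on top of this: Promise-returning exports for JS/TS, Swift-friendly naming on Native, and plain coroutines on the JVM.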

The unused return value checker ignores Unit and Nothing. How do you design APIs to signal “must-use” results versus fire-and-forget calls? Give naming patterns, annotations or contracts you rely on, and metrics that showed fewer logic bugs after adoption.

I use names to set expectations: verbNoun for mutating Unit-returning calls (applyConfig), nouns or past-tense for value-returning functions (normalizedUser). If a call is meant to be fire-and-forget, I ensure it returns Unit rather than echoing the receiver. For “must-use,” I return a distinct type and document it, so the checker will warn if it’s ignored. After adoption, the clearest signal wasn’t a number; it was code review tempo—fewer nitpicks about “use the returned value” and fewer test failures tied to ignoring results. The checker helps institutionalize that discipline.
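
In code, the two shapes I aim for look roughly like this (hypothetical names):

    // Fire-and-forget: mutates in place and returns Unit, so ignoring the call is
    // fine and the unused-return-value checker stays quiet.
    fun applyConfig(target: MutableMap<String, String>, key: String, value: String) {
        target[key] = value
    }

    // Must-use: returns a distinct value rather than echoing the receiver, so a
    // dropped result is exactly what the checker flags.
    data class NormalizedUser(val name: String)

    fun normalizedUser(rawName: String): NormalizedUser =
        NormalizedUser(rawName.trim().lowercase())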

After stabilizing nested type aliases, how do you balance readability and type safety in large codebases? Share concrete alias hierarchies that worked, ones that confused teammates, and a checklist for when to introduce or remove an alias.

What works: aliases that are truly domain names, housed next to the owning type.

    class Order {
        typealias Id = String
        typealias Token = String
    }
    fun fetch(id: Order.Id, token: Order.Token)

This reads well and avoids mixing identifiers. What confused people: deeply nested aliases chaining across multiple owners—at that point, it’s indirection, not clarity. My checklist: introduce an alias only if it communicates domain meaning, colocate it with its owner, and avoid alias-of-alias chains. Remove an alias if reviewers need to jump files to understand it or if it duplicates an existing domain type.

For teams mixing JVM, Native, JS, and Wasm, which 2.3.0-RC changes had the biggest ROI? Rank your top three with rationale, include onboarding time saved or defect rates reduced, and walk us through one migration end-to-end with timelines and checkpoints.

My top three:

  1. @JsExport for suspend: deletes interop boilerplate and makes TS consumption straightforward.
  2. Swift enums and variadics: APIs feel native, which reduces misunderstanding and support churn.
  3. Unused return value checker: catches logic bugs that tests sometimes miss.

Onboarding got faster because new contributors don’t have to learn bespoke bridges for JS/TS or Swift. Defect rates related to ignored results dropped because the compiler now flags them. A migration I ran end-to-end: enable @JsExport for suspend on a small JS module, regenerate d.ts, update TS calls, and run tests. Then expand to larger modules, update docs, and watch CI for regressions. The checkpoints were simple: green builds, clean type-checking in TS, and stable runtime logs.

Do you have any advice for our readers?

Treat this RC as a chance to pay down interop and correctness debt. Flip on the unused return checker in a canary, move your aliases closer to their owners, and try exporting one suspend function to JS and one enum to Swift to test the waters. Keep your Gradle pinned inside the compatibility window, and run a broad test matrix—including Wasm—before you promote. You’ll come out with cleaner APIs, clearer intent, and fewer surprises when the stable release lands.
