
Modular Interaction Layer (MIL)

By Everett Quebral

Introduction

Imagine a user pressing “play” on a song—whether on their phone, car dashboard, or desktop app. Despite different platforms and UI components, the user's intent is the same. The Modular Interaction Layer (MIL) ensures that this intent is handled uniformly, contextually, and consistently across all surfaces.

The Modular Interaction Layer (MIL) represents the interface surface of the Composable Frontend Architecture—where user intent meets system behavior.

While CEL (logic), DIM (structure), APC (presentation), ECN (event flow), UIF (interaction abstraction), and CSEE (execution context) enable runtime composability, MIL defines how user interactions are orchestrated across modular, distributed UI components.

MIL ensures that interactions—clicks, gestures, voice commands, keyboard input, and other intents—trigger consistent, traceable behavior regardless of where components are composed or rendered. It provides a bridge between intent and behavior in a multi-surface, multi-runtime world.


Why We Need MIL

Modern UI Pain Points MIL Solves

  • Component Isolation without Behavioral Fragmentation: In composable architectures, interaction logic often becomes fragmented or duplicated across micro frontends.
  • Cross-Surface Inconsistency: Mobile gestures may behave differently than web clicks; embedded surfaces may lack keyboard controls.
  • Global Behavior without Global Scope: Features like undo, shared navigation, or dynamic state transitions are often implemented in monolithic layers.
  • Hard-to-Trace Interactions: Debugging UI logic becomes hard when input flows aren't declarative or traceable across boundaries.

MIL introduces a composable, testable, and declarative layer for binding user input to contextual behavior—within and across modular UI boundaries.


Historical Context and Prior Art

1. MVC/MVP: Controller-Centric Input Mapping

Legacy patterns like MVC or MVP coupled interaction tightly with domain controllers. As component boundaries emerged, these input handlers became either too global or too fragmented.

2. Redux and Centralized State

Interaction was routed through central dispatchers or reducers. While powerful, this approach introduced coupling, latency, and state bloat when applied to modular UIs.

3. Event Bus and Pub/Sub Systems

Event buses decoupled interaction producers from consumers, but lacked traceability and type safety in complex hierarchies.

4. Custom Hooks and Signals

Modern systems like React Hooks or Solid.js signals expose local interaction logic, but lack standard patterns for cross-component interaction at scale.

MIL builds on these ideas—favoring declarative interaction binding, context-aware scoping, and cross-boundary orchestration of interactions.


Architecture Overview

To illustrate how MIL works in real-world applications, consider a user clicking a 'Save' button in a document editor:

  • The Intent Emitter captures the click event.
  • The Contextual Resolver checks which document is active and whether the user has write permissions.
  • The Interaction Handler invokes the logic to persist the document.
  • The Feedback Channel displays a toast notification and updates the button state.

This layered approach abstracts interaction flows into modular, testable segments. Each component in the MIL pipeline contributes to predictable, composable, and traceable interaction design. MIL enables distributed teams to define local interaction behavior while still supporting global UX consistency.

MIL is composed of:

  1. Intent Emitters – Abstracted user actions (click, gesture, voice)
  2. Contextual Resolvers – Determine what should respond to the intent (via context, scope, or surface)
  3. Interaction Handlers – Modular, portable logic blocks that execute the behavior
  4. Feedback Channels – Visual, auditory, or haptic feedback confirming the interaction occurred

+---------------+        +--------------------+        +---------------------+        +------------------+
| User Action   |  --->  | Intent Emitter     |  --->  | Interaction Logic   |  --->  | Feedback Channel |
| (click, etc.) |        | (onPress, onVoice) |        | (resolve + execute) |        | (UI response)    |
+---------------+        +--------------------+        +---------------------+        +------------------+
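
This flow can be made concrete with a small sketch. Every name below (emitIntent, InteractionContext, and so on) is an illustrative assumption for the pattern, not a published API; the later examples reuse these names.

type IntentToken = string;

interface InteractionContext {
  surface: 'web' | 'mobile' | 'embedded';
  userId?: string;
  payload?: unknown;
}

type InteractionHandler = (ctx: InteractionContext) => void | Promise<void>;
type FeedbackChannel = (intent: IntentToken, outcome: 'ok' | 'error') => void;

// Registries of interaction handlers and feedback channels.
const handlers = new Map<IntentToken, InteractionHandler>();
const feedbackChannels: FeedbackChannel[] = [];

// Intent Emitter: UI code calls this instead of running logic inline.
async function emitIntent(intent: IntentToken, ctx: InteractionContext) {
  // Contextual Resolver: look up the handler registered for this intent.
  const handler = handlers.get(intent);
  if (!handler) return; // nothing in scope responds; the intent is dropped
  try {
    await handler(ctx); // Interaction Handler: execute the behavior
    feedbackChannels.forEach((f) => f(intent, 'ok')); // Feedback Channel
  } catch {
    feedbackChannels.forEach((f) => f(intent, 'error'));
  }
}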

Implementation Examples

Each of these examples aligns with the MIL architecture:

  • Intent Emitters are click handlers or custom events.
  • Contextual Resolvers are powered by hooks or scoped registrations.
  • Interaction Handlers are central logic modules registered with MIL.
  • Feedback Channels are controlled through UI state and responses.

React: Modular Click Handling

1 function SaveButton() {
2   const onSave = useInteractionHandler('saveAction');
3   return <button onClick={onSave}>Save</button>;
4 }

Explanation:

  • Line 1: Declares a functional component named SaveButton.
  • Line 2: Uses a custom hook useInteractionHandler to resolve the interaction logic for the 'saveAction' intent.
  • Line 3: Binds the resolved function to the button's onClick event, maintaining separation between UI and logic.
  • useInteractionHandler binds an intent ("saveAction") to the behavior registered in MIL.
  • MIL internally resolves the scope (e.g., user context, device) and invokes the proper handler; one plausible shape for this hook is sketched below.
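
Building on the emitIntent sketch from the Architecture Overview, such a hook might look like this:

import { useCallback } from 'react';

// Hypothetical hook: resolves an intent token to a stable callback that the
// component can bind to DOM events without knowing the underlying logic.
function useInteractionHandler(intent: string) {
  return useCallback(() => {
    void emitIntent(intent, { surface: 'web' });
  }, [intent]);
}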

TypeScript: Registering Interaction Logic

1 registerInteraction('saveAction', async (ctx) => {
2   await saveDocument(ctx.documentId);
3   notify('Saved!');
4 });

Explanation:

  • Line 1: Registers a logic handler for the intent saveAction.
  • Line 2: Executes the logic (e.g., persisting data).
  • Line 3: Provides user feedback via a notification.
  • Logic is defined once and mapped to an intent.
  • The handler can be swapped, composed, or redirected at runtime (see the registry sketch below).
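
A minimal registerInteraction over the Map-based registry sketched earlier could look like this; returning an unregister function is an added assumption that makes scoped cleanup easy:

// Hypothetical registration API over the earlier handlers Map.
function registerInteraction(
  intent: string,
  handler: InteractionHandler
): () => void {
  handlers.set(intent, handler);
  return () => {
    // Only remove the handler if it hasn't already been replaced.
    if (handlers.get(intent) === handler) handlers.delete(intent);
  };
}

Because later registrations overwrite earlier ones, swapping or redirecting a handler at runtime is just another registerInteraction call.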

Web Components

1 this.addEventListener('press', () => handleInteraction('saveAction'));

Explanation:

  • Listens for a custom press event and routes it to the interaction handler registered for the saveAction intent.
  • This enables decoupling between platform-specific event emitters and shared interaction logic.
  • MIL supports decoupled, platform-agnostic interfaces for interaction; a fuller element sketch follows.
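
As a hypothetical custom element (element and intent names are illustrative), the same wiring could read:

// The native click event is translated into a platform-agnostic intent
// instead of being handled inline.
class MilSaveButton extends HTMLElement {
  connectedCallback() {
    this.textContent = 'Save';
    this.addEventListener('click', () => {
      void emitIntent('saveAction', { surface: 'web' });
    });
  }
}
customElements.define('mil-save-button', MilSaveButton);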

Patterns and Anti-Patterns

📌 Real-World Example:

  • Good Pattern: In Adobe Express, each document tool registers its own handlers only when visible. When switching tools, old handlers are unregistered.
  • Anti-Pattern: In a large retail app, a single global click handler attempted to manage all actions, causing regressions and race conditions across components.

Patterns

  • Scoped Registrations: Handlers bound only in their UI domain or layout scope (see the sketch after this list).
  • Intent as ID: Every interaction is identified by a string or token, decoupling it from implementation.
  • Cross-Surface Reusability: Same interaction logic on mobile, web, or TV interfaces.
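
As a sketch of the scoped-registration pattern, assuming the registerInteraction function from earlier, a React hook can tie a handler's lifetime to its component's mount and unmount:

import { useEffect } from 'react';

// Hypothetical scoped registration: the handler exists only while the owning
// component (e.g., a document tool) is mounted, echoing the Adobe example.
function useScopedInteraction(intent: string, handler: InteractionHandler) {
  useEffect(() => {
    const unregister = registerInteraction(intent, handler);
    return unregister; // unbound automatically when the scope unmounts
  }, [intent, handler]);
}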

🚫 Anti-Patterns

  • Binding to DOM events directly in business logic
  • Hardcoded handlers in deeply nested components
  • One handler for multiple unrelated behaviors

Real-World Case Studies

Spotify – Unified Playback Controls

Problem: Spotify’s playback logic was fragmented across multiple platforms—web player, mobile apps, car UIs, and smart speakers. Each had slightly different behaviors, causing inconsistencies in seeking, pausing, and state syncing.

Challenge: Developers found it difficult to scale features (like 'jump to chorus') across all platforms while maintaining consistent behavior. Platform-specific hacks increased complexity.

Solution:

  • Introduced a MIL pattern where all user interaction was abstracted into high-level intent tokens (e.g., playTrack, skipForward, seekTo).
  • Central interaction handlers were injected contextually based on platform and device type.
  • Used a shared analytics layer to trace interaction origin and impact across surfaces.

How They Did It:

  • Created an interactionRegistry scoped by device type (mobile, embedded, desktop); a sketch follows this list.
  • Routed all UI actions through an internal emitIntent('playTrack') API.
  • Wrapped platform-specific implementations in a feedback abstraction (e.g., visual play toggle, haptic tap).
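
Spotify's internal code is not public; the sketch below only illustrates the device-scoped registry described here, reusing the types from the Architecture Overview:

type DeviceType = 'mobile' | 'embedded' | 'desktop';

// Hypothetical device-scoped registry: the same intent token resolves to a
// different implementation per device type.
const interactionRegistry = new Map<DeviceType, Map<string, InteractionHandler>>();

function registerFor(device: DeviceType, intent: string, handler: InteractionHandler) {
  if (!interactionRegistry.has(device)) {
    interactionRegistry.set(device, new Map());
  }
  interactionRegistry.get(device)!.set(intent, handler);
}

// The emitter resolves the handler for the current device at dispatch time.
function emitIntentFor(device: DeviceType, intent: string, ctx: InteractionContext) {
  interactionRegistry.get(device)?.get(intent)?.(ctx);
}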

Result:

  • Reduced interaction bugs by 60% across surfaces
  • Improved engineering onboarding time (new controls rolled out in half the time)
  • Unified playback telemetry enabled smarter UX experiments

📘 Insight: By decoupling UI components from input logic, Spotify dramatically improved cross-platform parity and traceability.

Adobe Express – Action Palette

Problem: Adobe Express needed a way to let users quickly trigger design actions—like aligning objects, changing layout, or applying filters—across various device types.

Challenge: Embedding logic directly into UI controls limited reusability and made accessibility hard to implement consistently.

Solution:

  • Introduced a MIL-based command system with scoped handlers.
  • Each interaction (e.g., alignLeft, applyFilter) was treated as a token resolved at runtime.
  • Floating UI (Command Palette) dispatched these tokens regardless of where the action originated (touch, keyboard, voice).

How They Did It:

  • Used React Context to expose interaction registration API per document scope.
  • Enabled simulation of intent triggers via automated tests (e.g., emitIntent('duplicateElement') in Cypress).
  • Composed interaction logic via middleware (e.g., telemetry, feature flag gating); a sketch follows.
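
As a sketch rather than Adobe's actual implementation, middleware composition over interaction handlers could look like the following (sendTelemetry and isFlagEnabled are stand-ins, not real APIs):

type Middleware = (next: InteractionHandler) => InteractionHandler;

// Stand-in implementations for illustration only.
const sendTelemetry = (event: string, data: unknown) => console.log(event, data);
const isFlagEnabled = (_flag: string) => true;

// Each middleware wraps the handler it receives.
const withTelemetry: Middleware = (next) => async (ctx) => {
  sendTelemetry('intent.fired', ctx);
  await next(ctx);
};

const withFeatureFlag = (flag: string): Middleware => (next) => async (ctx) => {
  if (!isFlagEnabled(flag)) return; // gate the behavior behind a flag
  await next(ctx);
};

// Wraps the base handler right-to-left with each middleware.
function compose(handler: InteractionHandler, ...middleware: Middleware[]) {
  return middleware.reduceRight((wrapped, mw) => mw(wrapped), handler);
}

registerInteraction(
  'applyFilter',
  compose(async (ctx) => { /* apply the filter */ }, withTelemetry, withFeatureFlag('filters-v2'))
);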

Result:

  • 3× improvement in discoverability of design tools
  • Automated interaction test coverage reached 85%
  • Team reduced interaction-related bugs in cross-device flows by ~45%

📘 Reference: Adobe Spectrum’s approach to interaction APIs helped inform the MIL structure.


Glossary

  • Intent Emitter – Abstract source of user input (click, gesture, voice, etc.)
  • Interaction Handler – Logic that processes and responds to an intent
  • Contextual Resolver – Layer that decides which handler should be triggered
  • Feedback Channel – UI, sound, or haptic response confirming a user action
  • Scope Binding – Restricting interaction logic to a section or view context

Summary

MIL is the orchestrator of user intent in a composable UI system. It ensures interactions are decoupled, scoped, traceable, and portable—allowing behavior to scale as UIs become more distributed.

✅ Next Steps

  • Audit your UI for hardcoded input logic
  • Define reusable intent tokens (e.g., 'submitForm', 'togglePlayback')
  • Create scoped interaction handlers
  • Implement feedback channels for each interaction
  • Simulate and test intent flows end-to-end (a minimal example follows)
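
As a starting point for that last step, here is a hypothetical end-to-end check built on the earlier registerInteraction/emitIntent sketches; a real suite might drive the same flow from Cypress, as in the Adobe example:

// Register a handler, emit the intent, and assert the behavior ran.
async function testSubmitFormIntent() {
  let submitted = false;
  const unregister = registerInteraction('submitForm', () => {
    submitted = true;
  });
  await emitIntent('submitForm', { surface: 'web' });
  unregister();
  console.assert(submitted, 'submitForm handler should have run');
}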
