Agent-Computer Interface

concept
ai-agents tool-design aci hci interface-design

The Agent-Computer Interface (ACI) is the counterpart to the Human-Computer Interface (HCI): the boundary through which AI agents interact with tools, APIs, and execution environments. The concept, introduced in Building Effective Agents, argues that tool design for agents deserves the same investment and rigor as UI design for humans.

Core insight

Humans invest enormous effort in HCI — user testing, iteration, accessibility, affordances. Agent tool design should receive equivalent investment. A tool that is ambiguous, poorly documented, or awkwardly formatted will cause the same kinds of errors that a confusing UI causes for humans.

Design principles

Format selection

Not all equivalent formats are equally easy for a model to produce:

  • Diffs vs. full rewrites — diffs require the model to compute hunk-header line counts before writing the code itself. Full rewrites avoid this counting overhead.
  • Code in JSON vs. markdown — JSON requires escaping newlines and quotes. Markdown code blocks are native to model training data.
  • Structured output — keep formats close to what the model has seen in its training distribution.

General rule: avoid formats that require the model to maintain accurate counts, perform string escaping, or plan far ahead before committing to output.
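The JSON-versus-markdown point can be made concrete. A minimal sketch (the snippet and field name are illustrative, not from the article) showing the extra escaping layer a model must produce correctly when code travels inside a JSON argument:

```python
import json

# A small code snippet the model must emit through a tool call.
code = 'def greet(name):\n    print(f"Hello, {name}!")\n'

# Inside a JSON tool argument, every newline and quote must be escaped --
# an extra transformation the model has to get exactly right.
as_json = json.dumps({"content": code})
print(as_json)

# In a markdown code block, the code appears exactly as it does in
# training data -- no escaping layer at all.
as_markdown = "```python\n" + code + "```"
print(as_markdown)
```

The JSON form contains literal `\n` and `\"` sequences; the markdown form is the code verbatim.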

Give tokens to think

The model needs space to reason before producing output. Tool formats that force immediate commitment (e.g., the first token determines a structural choice) cause more errors than formats that allow the model to plan.
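One way to apply this in structured output, sketched below with a hypothetical schema: order the fields so reasoning tokens are generated before the structural commitment.

```python
# Hypothetical structured-output schema. Because fields are generated in
# order, putting "reasoning" first gives the model planning tokens before
# it must commit to "chosen_tool".
schema_plan_first = {
    "type": "object",
    "properties": {
        "reasoning": {"type": "string"},    # generated first: space to think
        "chosen_tool": {"type": "string"},  # structural choice comes last
    },
    "required": ["reasoning", "chosen_tool"],
}
```

Reversing the field order would force the structural choice as the first tokens of output, which is exactly the failure mode described above.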

Documentation as interface

Tool descriptions function as the model’s UI. Practical requirements:

  • Include example usage, edge cases, and input format requirements
  • Draw clear boundaries between similar tools
  • Name parameters to make their purpose obvious
  • Write descriptions as you would a docstring for a junior developer
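Applied to a single tool, the requirements above might look like this. The tool name, description text, and schema shape are all hypothetical, written in the junior-developer-docstring register the article recommends:

```python
# Hypothetical tool definition illustrating documentation-as-interface:
# example usage, edge cases, input format, and a boundary against a
# similar tool, all in the description the model actually reads.
read_file_tool = {
    "name": "read_file",
    "description": (
        "Read a text file and return its contents as a string.\n"
        "Use this to inspect source files; to discover paths first, "
        "use `list_directory` instead.\n"
        "Input format: `path` must be an absolute path "
        "(e.g. '/home/user/app/main.py'). Relative paths are rejected.\n"
        "Edge cases: returns an error message (not an exception) if the "
        "file does not exist or is not valid text."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {
                "type": "string",
                "description": "Absolute path to the file to read.",
            }
        },
        "required": ["path"],
    },
}
```

Note how the description draws the boundary against a neighboring tool (`list_directory`) and states the input format explicitly rather than leaving it to inference.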

Poka-yoke for tools

Borrow from manufacturing: design tools so mistakes are structurally impossible. The canonical example from the article: an agent tool that accepted relative file paths broke when the agent changed directories. Requiring absolute paths eliminated the entire error class.

Other applications:

  • Constrain enum parameters instead of accepting free text
  • Validate inputs before execution rather than returning cryptic errors
  • Make the default behavior the correct behavior
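A minimal sketch combining two of these moves — absolute-path-only input and an enum-constrained parameter — with validation before execution. The function and its parameters are illustrative, not from the article:

```python
import os

ALLOWED_MODES = {"read", "write"}  # enum-constrained, not free text

def open_file(path: str, mode: str) -> str:
    """Hypothetical tool with poka-yoke checks: invalid calls are
    rejected with actionable messages before touching the filesystem."""
    if not os.path.isabs(path):
        # The relative-path error class cannot occur: the call fails fast
        # instead of resolving against whatever the current directory is.
        return f"Error: 'path' must be absolute, got {path!r}."
    if mode not in ALLOWED_MODES:
        return f"Error: 'mode' must be one of {sorted(ALLOWED_MODES)}, got {mode!r}."
    return f"OK: would {mode} {path}"

print(open_file("notes.txt", "read"))         # rejected: relative path
print(open_file("/tmp/notes.txt", "append"))  # rejected: invalid mode
```

The error strings double as documentation: they tell the model what a valid call looks like, which is far more recoverable than a cryptic downstream failure.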

Test and iterate

Run many example inputs through the tool. Observe what mistakes the model makes. Iterate on descriptions, parameter names, and formats. This is empirical engineering, not a priori design.

The SWE-bench lesson

The Anthropic team reports spending more time optimizing tools than the overall prompt when building their SWE-bench agent. This inverts the common assumption that prompt engineering is the primary lever — for agentic systems, tool engineering dominates.

Connections

  • The execute(name, input) -> string interface in Brain-Hands Decoupling is an ACI — simple, uniform, hard to misuse
  • Hermes Agent’s 40+ tools with typed schemas and dispatch loop are a large-scale ACI implementation
  • Agent Exec Policy adds a safety layer on top of the ACI — controlling which tool calls are approved
  • The poka-yoke principle connects to Robustness Control — designing systems that resist failure modes rather than detecting them after the fact
  • The emphasis on absolute paths over relative paths mirrors standard software engineering practice and is enforced by tools like Claude Code