Generative Plane

Imago: Building a Terminal Workflow for Conversational Writing

The Friction Point

I was already writing blog posts in a conversational loop with Claude and Claude Code, but the chat interface became a bottleneck. Every time I needed to review or revise a section, I had to flip between a text editor and the chat window. This was more than a minor annoyance; it created a cognitive loop: copy text, switch contexts, paste, wait for a response, repeat. The draft would disappear from view during generative steps, breaking the flow. The problem wasn't the tools themselves but the lack of a single-window workflow that could handle both iterative dialogue and sequential editing.

The Terminal as a State Machine

The solution was to build a TUI that maps directly to my process. Imago uses Bubble Tea to create distinct modes:

  • Interview phase: Axon tools drive conversational drafting.
  • Review phase: Markdown sections are edited in-place, one at a time.
  • Publishing: Synd API integration lives in the same interface.

The TUI is structured as a state machine tailored for writing workflows, with the terminal serving as a single source of truth. This avoids the fragmentation of browser-based tools. For example, tui.New() initializes the interface with clear transitions between modes, while internal/session handles file persistence without exposing it to the user.
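The core of that state machine can be sketched in a few lines of plain Go. The mode and event names below are illustrative, not Imago's actual types, and the real Bubble Tea model carries far more state; this only shows the shape of the transitions.

```go
package main

import "fmt"

// mode enumerates the phases of the writing workflow.
type mode int

const (
	modeInterview mode = iota // conversational drafting
	modeReview                // section-by-section editing
	modePublish               // Synd API integration
)

// event drives transitions, mirroring Bubble Tea's message-passing style.
type event int

const (
	evDraftDone event = iota // interview finished, move to review
	evApproved               // all sections approved, move to publish
	evRevise                 // reviewer requested another interview pass
)

// next returns the mode that follows the current one for a given event;
// unknown combinations leave the mode unchanged.
func next(m mode, e event) mode {
	switch {
	case m == modeInterview && e == evDraftDone:
		return modeReview
	case m == modeReview && e == evApproved:
		return modePublish
	case m == modeReview && e == evRevise:
		return modeInterview
	}
	return m
}

func main() {
	m := modeInterview
	m = next(m, evDraftDone)
	fmt.Println(m == modeReview) // true
}
```

Keeping transitions in one pure function makes the workflow easy to reason about: every keystroke resolves to an event, and the current mode decides what the single window shows.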

Local LLMs and Predictable Latency

I chose Ollama for local inference because it avoids API rate limits, reduces token costs by using lightweight models for routine tasks, and keeps all data private on my machine until publication. The axon-talk Ollama adapter lets me switch models via environment variables—no code changes needed. This matters: when writing, I want to offload simple tasks to small models (like llama3:8b) while reserving larger models for critical thinking. The local setup also ensures no conversation fragments escape to external services, and all interactions remain available for future model training.

The main.go file shows how this works:

client, err := ollama.NewClientFromEnvironment()

It’s a small line, but it decouples the tool from any single model or service while maintaining full control over data flow.

Modular Tooling Without Coupling

Imago’s toolset is assembled from discrete components:

  • axon-tool definitions for search, publishing, and infrastructure queries.
  • Environment-bound services like Synd, SearXNG, and axon-memo.
  • Filesystem-based session storage.

This modularity is enforced by the Go module structure. The CLI layer (cmd/imago) has no direct dependency on the TUI or session logic. Everything is injected at runtime via tools.All(cfg), which builds a capabilities matrix from environment variables and config files.

Memory That Lasts Beyond a Session

The axon-memo dependency tracks editorial voice across sessions under the imago agent slug. It maintains context between interview phases and revision cycles and supports the gradual development of the "journalist" persona. Because state persists between sessions, Imago avoids the "reset" problem of stateless chat interfaces: a draft can be paused and resumed later, and the persona's tone and perspective stay continuous.

What It Is, and Isn’t

Imago isn’t a general-purpose editor. It’s a narrowly scoped solution for a specific workflow—one that required a terminal-native tool to match how I actually work. The codebase reflects this: strict separation between internal/tui, internal/session, and tools/ ensures no component holds more responsibility than necessary.

The result is a workflow that eliminates manual text shuffling, keeps the draft visible during generative steps, and uses conventional Go patterns (env config, dependency injection) for maintainability. It’s not perfect, but it works for what it was built to do.

What’s Next?

Future iterations may explore tighter markdown tooling integration or expanded memory capabilities. But the core principle remains: the interface should align with the cognitive task, not the other way around. For now, Imago does what it was designed to do—collapse a fractured workflow into a single window, with no more context switching than necessary.