Product walkthrough
How multi-agent AI editorial review works
Editorial Conductor is built around a simple idea: treat manuscript review like a staged editorial room, not a single generic chat prompt. You supply the text (and optional canon context); the product runs specialist passes in a fixed order so continuity and structure are not drowned out by late-stage polish opinions.
Step-by-step walkthrough
The workspace is meant to mirror how authors actually revise: ingest the manuscript, anchor continuity, receive structured feedback in passes you can trust, then fold that feedback into Word or Scrivener (or your writer tools) without losing track of what came from which lens.
1. Upload
You begin on the Analyze screen. Upload a Microsoft Word (.docx) file—either a single chapter or a longer manuscript file, depending on whether you want a scoped chapter review or a broader pipeline pass. The app indexes what you uploaded so later screens can tie findings back to your book title, chapter boundaries, and storage-safe paths (your content stays in your workspace under normal operation).
2. Series bible setup (optional but powerful)
For series work—or any book where canon matters—you can attach a series bible: your names, timelines, rules, relationships, and anything you consider non-negotiable for continuity. You can also add prose style guidance when you want agents to judge voice against a stated target rather than inferring tone only from the chapter at hand.
If you skip the bible, the continuity stage still runs, but it focuses on internal consistency within the uploaded text rather than checking against external canon you have not supplied. Many writers run their first pass without a bible to get a fast structural read, then attach canon before a second pass—both workflows are valid.
3. Agent run (four analysis stages plus synthesis, nine specialists)
When you start analysis, jobs move through four analysis stages, Pre-Flight, Craft, Depth, and Market+Merit, in that order; a final Synthesis pass then merges the specialist reports into a single brief. Each stage contains specialist agents with a narrow mandate so scores and findings stay legible: continuity before structure, structure before line polish, craft before thematic depth, and market-facing reads last—when the chapter is already coherent enough for those judgments to mean something.
You can deep-dive each specialist on the Agents directory; the runtime order matches the staging philosophy described there.
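The fixed staging order above can be pictured as an ordered pipeline where later stages see earlier results. The sketch below is illustrative only: the stage and agent groupings follow this walkthrough, but the function names and data structures are hypothetical, not Editorial Conductor's actual internals.

```python
# Illustrative sketch of the fixed staging order: one continuity
# specialist, four craft agents, two depth agents, two market-facing
# agents -- nine specialists across four analysis stages.
STAGES = [
    ("Pre-Flight", ["continuity"]),
    ("Craft", ["structural", "line_editor", "copy_editor", "voice_consistency"]),
    ("Depth", ["thematic", "emotional_truth"]),
    ("Market+Merit", ["market", "literary_ambition"]),
]

def run_pipeline(chapter_text, agents):
    """Run each stage in order; later stages can see earlier results."""
    results = {}
    for stage_name, agent_names in STAGES:
        stage_results = {}
        for name in agent_names:
            # Each agent judges only its narrow mandate.
            stage_results[name] = agents[name](chapter_text, results)
        results[stage_name] = stage_results
    return results
```

The point of the fixed order is visible in the loop: a market-facing agent never runs until the continuity and craft results it implicitly depends on already exist in `results`.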
4. Reading your report
When processing completes, you land in the dashboard-style results experience: chapter-level panels, per-agent scores, severity-tagged findings, and summaries you can skim before you decide what to tackle first. The intent is triage—see what blocks the manuscript versus what is polish—rather than a wall of undifferentiated prose.
Findings are labeled so you can filter mentally (and in the UI where available) by Critical, Major, Minor, and Suggestion, aligned with how working editors classify risk. Agent scores use a bounded numeric scale so you can compare chapters over time without confusing different rubrics across agents—see “What a report looks like” below for detail.
5. Merging into revision
Editorial Conductor does not replace your manuscript file automatically. Instead, you take findings back into your drafting environment: accept or reject fixes, reorder scenes based on structural notes, adjust continuity against your bible, or export snippets when your workflow prefers offline revision. Writer-oriented features in the product (where enabled) exist to shorten the gap between commentary and rewritten prose; the authoritative draft remains yours.
What each stage produces
Each stage emits its own bundle of structured outputs—scores, summaries, severity-tagged observations, and agent highlights—scoped to what that stage is allowed to judge. Together they form your “report”; separately they remain interpretable when you want to zoom in on only structure or only continuity. Below is what you should expect to take away from each stage in practice.
Stage 1 — Pre-Flight
Protects continuity and established story facts before the deeper craft passes begin.
- Continuity findings — bible-backed or internal-consistency contradictions, with cited passages and explicit “what conflicts with what” framing.
- Severity-ranked risk list — which slips are credibility breakers versus polish-level nits before deeper passes spend attention.
- Continuity agent score + summary — a fast read on whether the chapter can safely move into structural diagnosis.
- Shared artifact shape — every agent still returns findings with severity labels, short summaries, numeric scores where applicable, and highlights—so results stay comparable across stages.
Stage 2 — Craft
Examines structure, line work, copy, and voice at chapter level.
- Structural notes — scene function, pacing, tension, chapter ending strength, and fit inside the wider arc (from the structural agent).
- Line-level craft — rhythm, clarity, repetition, and energy (line editor); mechanical correctness without flattening voice (copy editor).
- Voice guardrails — drift, register shifts, and moments the prose stops sounding like your manuscript (voice consistency).
- Per-agent scores & highlights — separate scores so you can tell whether the chapter is weak structurally versus weak at the sentence layer.
Stage 3 — Depth
Tests the emotional and thematic layers underneath the visible plot.
- Thematic analysis — motifs, symbolic through-lines, and whether the chapter advances the book's ideas coherently.
- Emotional truth checks — motivation, escalation, interior honesty, and earned versus rushed feeling.
- Depth-stage scores — distinct from craft scores: you may have clean prose that still misses emotional landing points—this stage surfaces that mismatch.
Stage 4 — Market + Merit
Measures the work against both submission realities and literary ambition.
- Market-facing read — positioning, hook strength, distinctiveness, and submission readiness signals.
- Literary ambition pass — originality, memorability, and what might lift the manuscript from competent to exceptional.
- Late-stage summaries — because these lenses run last, their findings assume earlier continuity and craft issues have already been surfaced upstream.
Stage 5 — Synthesis
Synthesizes all nine specialist reports into one conflict-free, prioritized editorial brief, delivered in the same shared artifact shape (severity-labeled findings, short summaries, scores where applicable, and highlights) as the specialist stages.
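The shared artifact shape that keeps stage outputs comparable can be pictured as a small record. The field names below are an illustrative guess inferred from this page (severity labels, a bounded score, summary, highlights, findings), not the product's actual schema.

```python
# Hypothetical sketch of the shared artifact shape described above.
# Field names are illustrative, not Editorial Conductor's real schema.
from dataclasses import dataclass, field
from typing import Optional

SEVERITIES = ("Critical", "Major", "Minor", "Suggestion")

@dataclass
class Finding:
    severity: str       # one of SEVERITIES
    issue: str          # the problem, stated plainly
    passage: str        # quoted or precisely located text
    suggested_fix: str  # so you can resolve or dismiss intentionally

@dataclass
class AgentReport:
    agent: str
    score: Optional[int]  # bounded scale, e.g. 1-10; None where not applicable
    summary: str
    highlights: list = field(default_factory=list)
    findings: list = field(default_factory=list)
```

Because every agent emits this same shape, a Pre-Flight continuity report and a Market+Merit report can sit side by side in the UI without translating between rubrics.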
What a report looks like
At the chapter level you should expect a repeatable layout rather than free-form chat: agents return JSON-shaped outputs internally, which the app surfaces as readable sections. That keeps scores and findings comparable when you rerun after a rewrite.
- Scores — each agent proposes a numeric score on a bounded scale (for example 1–10 where higher means closer to publication-ready for that lens). Use scores to prioritize which chapters demand another structural pass versus a light polish pass.
- Severity labels — Critical items are framed as blockers before submission (direct contradictions, structural breaks, voice collapse). Major issues are noticeable to careful readers. Minor issues are meaningful improvements that do not necessarily stop the line. Suggestions are optional polish—stylistic alternatives or small lifts that may still improve reader experience.
- Findings list — each row ties an issue to text (quoted or precisely located), states the problem plainly, and suggests a fix so you can resolve or dismiss intentionally.
- Summaries & highlights — quick orientation before you drill into findings; highlights counterbalance critique so revision plans stay constructive.
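Because findings share one severity vocabulary and scores share one bounded scale, triage reduces to filtering and sorting. A minimal sketch, assuming findings are records with a `severity` field and chapters carry per-agent scores (the threshold value and field names are assumptions for illustration):

```python
# Blockers first, optional polish last -- mirroring the severity
# ordering described above. Field names are illustrative.
SEVERITY_ORDER = {"Critical": 0, "Major": 1, "Minor": 2, "Suggestion": 3}

def triage(findings):
    """Order findings so submission blockers surface before polish items."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])

def chapters_needing_structural_pass(chapter_scores, threshold=6):
    """Chapters whose structural score falls below threshold, weakest first."""
    weak = [(ch, s["structural"]) for ch, s in chapter_scores.items()
            if s["structural"] < threshold]
    return [ch for ch, _ in sorted(weak, key=lambda pair: pair[1])]
```

The same comparison works across reruns: because the scale is bounded and per-agent, a chapter's structural score after a rewrite is directly comparable to its score before.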
How long does it take?
Runtime depends on manuscript length, how many chapters are in scope, provider availability, and queue load. A single chapter pass often finishes within a few minutes of wall-clock time under normal conditions—think “brew tea and stretch,” not overnight batch jobs.
A full-book pipeline scales with chapter count and word volume: multi-chapter jobs queue units of work sequentially so staging order stays coherent. If you need predictability for a deadline, prefer chapter batches or schedule longer manuscripts when you can step away while jobs progress.
You can monitor active jobs from the dashboard; stalled or failed units surface there so you can retry or reach support rather than guessing whether the pass is still alive.
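The sequential queueing described above can be sketched as a worker that drains chapters in order and retries failed units rather than silently dropping them. Everything here is illustrative, assuming an `analyze` callable that raises on transient faults; it is not the product's actual job runner.

```python
# Hypothetical sketch: chapters are processed one at a time so the
# staging order stays coherent, and failed units are retried before
# being surfaced as "failed" (where you would retry or contact support).
from collections import deque

def process_book(chapters, analyze, max_retries=2):
    """Process chapters sequentially, retrying transient failures."""
    queue = deque(enumerate(chapters, start=1))
    statuses = {}
    while queue:
        number, text = queue.popleft()
        for _attempt in range(1 + max_retries):
            try:
                analyze(text)
                statuses[number] = "done"
                break
            except Exception:
                statuses[number] = "failed"  # shown on the dashboard
    return statuses
```

Sequential draining is the key design choice: a later chapter never jumps ahead of an earlier one, so cross-chapter context assembled upstream stays valid.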
FAQ
Can I upload a full manuscript?
Yes—when your workflow calls for it, you can upload a longer .docx so the pipeline can operate across multiple chapters or the whole draft, depending on the analysis mode you select. Very large files take longer to ingest and process; treat full-manuscript runs as batch work and plan revision accordingly.
What file formats do you accept?
Manuscript upload is oriented around .docx (modern Word). If your draft lives in another tool, export to Word for the cleanest extraction path before you run agents.
What if a chapter or job fails?
Individual analysis jobs can error when content cannot be parsed, external services throttle, or transient faults occur. The dashboard shows job status; you can retry a run when the product exposes a retry path, and you can contact support through the nav if a unit remains stuck after a reasonable wait. How credits apply to partial or failed runs is spelled out on Pricing so you always know what a queued job costs before you scale up batch work.
Do I need a series bible?
No—standalone novels and discovery drafts run fine without one. Add a bible when explicit canon checks matter; that is when continuity findings become highest leverage against hidden contradictions across books.
See a live example
The demo walks sample chapters through the same pipeline, with side-by-side examples so you can compare continuity behavior, scores, and findings without uploading your own draft first.
Billing, accounts, and data handling are covered on Pricing, Privacy, and Terms. Questions? Use Support in the nav or the address on About.