
What Your First Real User Teaches You (That Beta Testers Never Can)

The night our first real user silently ran five chapters through a broken pipeline, and what we fixed in real time while he kept coming back.

by Cosmin · 4 min read


In short: Our first real user ran five chapters through the platform on day one. The pipeline was broken in ways our own testing never caught. He got his results every time. We fixed everything in real time. He came back the next morning for more.

There is a specific kind of silence that follows a product launch. Not the silence of nothing happening, but the silence of not knowing if anything is happening. No alerts, no complaints, no feedback. Just a dashboard and a question mark.

Then, without warning, a real user appears. Not a friend. Not a tester you briefed in advance. A stranger, in another country, who found the product and decided to trust it with their manuscript.

That is a different kind of pressure entirely.

The chapter that took eight hours

Our first real user, a writer working through a multi-book fantasy series, submitted his first chapter on a Tuesday morning. Chapter four: 5,000 words of carefully written fiction, uploaded with a series bible attached.

The analysis completed. Eventually. After eight hours.

He did not complain. He did not email support. He came back and submitted chapter five.

We did not know the pipeline was broken. Our internal tests had passed. The edge function was running, the agents were producing output, results were landing in the database. On paper, everything worked.

What we did not know: every agent step in the pipeline was hitting a silent authentication failure on its internal self-call, recovering via a two-minute stale sweep, then proceeding. Ten agents. Two minutes of dead wait between each one. The math is not flattering.
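The sweep itself is a common pattern: a scheduled task that finds jobs stuck in a running state with no recent progress and re-kicks them. A minimal sketch, with illustrative names and signatures rather than our actual schema:

```typescript
// Sketch of a two-minute stale sweep. It rescues jobs that have sat in
// "running" without a progress update, which is exactly what kept the
// pipeline limping along after each silent auth failure.
const STALE_AFTER_MS = 2 * 60 * 1000; // the two minutes of dead wait per agent

interface PipelineJob {
  id: string;
  status: "queued" | "running" | "done";
  updatedAt: number; // epoch ms of the job's last progress
}

async function sweepStaleJobs(
  listRunningJobs: () => Promise<PipelineJob[]>,
  retriggerStep: (jobId: string) => Promise<void>,
): Promise<void> {
  const now = Date.now();
  for (const job of await listRunningJobs()) {
    if (now - job.updatedAt > STALE_AFTER_MS) {
      await retriggerStep(job.id); // the rescue that masked the real failure
    }
  }
}
```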

For a user sitting on the other side of an animated progress bar, it just looked slow. He adapted. He submitted a chapter, went about his day, came back to read the report.

What broke and why

The root cause was a JWT verification setting. When the edge function triggered its own next step, a self-call to advance the pipeline, the gateway was rejecting the internal request with a 401 before it arrived. The recovery mechanism kicked in every two minutes to rescue stalled jobs. It worked. It was just two minutes slower than it needed to be, per agent.
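For context, the internal self-call looked roughly like the sketch below. The function URL, environment variable names, and payload shape are all illustrative, and we assume a Deno-style edge runtime; the point is the hop that the gateway was rejecting before the handler ever ran.

```typescript
// Hypothetical sketch of the pipeline advancing itself to the next agent.
// With gateway JWT verification enabled, this request bounced with a 401
// before the function body executed; the stale sweep then rescued the job
// two minutes later.
const FUNCTION_URL = Deno.env.get("PIPELINE_FUNCTION_URL") ?? "";
const SERVICE_KEY = Deno.env.get("SERVICE_ROLE_KEY") ?? "";

async function advancePipeline(jobId: string, nextAgent: number): Promise<void> {
  const res = await fetch(FUNCTION_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${SERVICE_KEY}`, // internal auth for the hop
    },
    body: JSON.stringify({ jobId, agent: nextAgent }),
  });
  if (!res.ok) {
    // The original failure mode: this rejection was swallowed silently.
    // Logging it is the cheapest insurance against a repeat.
    console.error(`self-call rejected: ${res.status}`);
  }
}
```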

The second issue was a guard condition in the job-claiming logic. When a job was freshly claimed and moved from queued to running, it hit a status check that evaluated whether to proceed, but the check did not account for the fact that it had just done the claiming itself. It returned a "running-job-awaiting-continuation" signal and stopped. The stale sweep picked it up two minutes later.
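In simplified form, the guard behaved like the sketch below. The names and the signal string are illustrative; the shape of the bug, and of the bypass we shipped, is the same.

```typescript
// Sketch of the job-claiming guard. The claim transition (queued -> running)
// happened first; the status check that followed did not know the caller
// was the claimer, so it bailed and left the job to the stale sweep.
type JobStatus = "queued" | "running" | "done";

interface Job {
  id: string;
  status: JobStatus;
}

function shouldProceed(job: Job, justClaimed: boolean): boolean {
  // Original logic, roughly: a running job means "someone else has it,
  // wait for continuation", which deadlocked the claimer against itself.
  if (job.status === "running" && !justClaimed) {
    return false; // running-job-awaiting-continuation: wait for the sweep
  }
  // The fix: a freshly claimed job bypasses the guard and proceeds
  // immediately instead of waiting ~2 minutes to be rescued.
  return true;
}
```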

Neither of these failures produced an error the user could see. They produced latency.

The fix and the moment everything changed

We deployed two changes: we disabled JWT verification on internal self-calls and added a guard bypass for freshly claimed jobs. Both were single-line fixes. Combined, they eliminated approximately two minutes of dead time per agent.

Ten agents. Two minutes each. Twenty minutes of latency, gone.

The next chapter our user submitted, chapter eight, completed in four minutes.

He had not said a word about the previous eight hours. He just kept using the product. When chapter eight came back in four minutes, he ran chapter nine the same evening.

That is what a real user teaches you that a beta tester cannot. A beta tester tells you what broke. A real user shows you how much they will absorb before they leave, and how quickly they recognise when something improves, even if they never articulate it.

The email that finally landed

There was a second failure running in parallel: completion emails were not sending. The domain verification for our sending domain had failed silently twelve days earlier. Every email we attempted to send was rejected before it left the provider.

The user was never notified when his chapters finished. He was refreshing the dashboard himself.

Fixing this required adding three DNS records to our domain: a DKIM key, an MX record for bounce handling, and an SPF record. The DNS management UI was silently dropping long TXT records, so we went to the provider's REST API directly, added the records via PowerShell, confirmed the values against the provider's expected format, and watched the domain flip to Verified within sixty seconds of the final record propagating.
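We did the actual calls in PowerShell that night, but the shape of the workaround is easy to show. The sketch below uses a hypothetical provider endpoint, token, and record values; your DNS provider's API will differ.

```typescript
// Hypothetical sketch: adding DNS records through a provider's REST API,
// bypassing a web UI that silently dropped long TXT values.
const API_URL = "https://api.dns-provider.example/v1/zones/ourdomain.com/records";
const API_TOKEN = "..."; // provider API token (illustrative)

async function addRecord(record: {
  type: "TXT" | "MX";
  name: string;
  content: string;
  priority?: number;
}): Promise<void> {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(record),
  });
  if (!res.ok) throw new Error(`DNS API error: ${res.status}`);
}

// The three records that unblocked delivery (values illustrative):
await addRecord({ type: "TXT", name: "dkim._domainkey", content: "v=DKIM1; k=rsa; p=..." });
await addRecord({ type: "MX", name: "bounces", content: "mail.provider.example", priority: 10 });
await addRecord({ type: "TXT", name: "bounces", content: "v=spf1 include:provider.example ~all" });
```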

The next analysis that completed sent a proper email. Dark background, amber button, the library image from the site. It looked like the product.

What patient users are telling you

If your first real user submits five chapters through a broken pipeline without filing a single complaint, one of two things is true: either they do not care, or they care enough to keep going despite the friction.

The way you tell the difference is whether they come back.

He came back.

That is not a story about a resilient user. That is a story about a product that was useful enough, even broken, that the value proposition survived the delivery failure. The agents were producing real findings. The reports were actionable. The core of what we built was working, even when the scaffolding around it was not.

That is the thing beta testing cannot replicate. You can stress-test infrastructure, but you cannot simulate the moment when someone who has nothing invested in your success decides to keep going anyway.

What we shipped by morning

By the time our user submitted his ninth chapter, we had:

  • Cut analysis time from ~90 minutes to ~4 minutes per chapter
  • Fixed silent authentication failures on internal pipeline calls
  • Verified the sending domain and restored completion email delivery
  • Added a word count cap (15,000 words for single chapters) to prevent misuse of the single-chapter path (sketched below)
  • Redesigned the completion email to match the product's visual identity
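The cap itself is a pre-flight check before a chapter enters the pipeline. A minimal sketch; the limit matches what we shipped, the function name does not.

```typescript
// Sketch of the single-chapter word cap added that night.
const SINGLE_CHAPTER_WORD_CAP = 15_000;

function assertWithinCap(text: string): void {
  // Rough word count: split on runs of whitespace.
  const words = text.trim().split(/\s+/).filter(Boolean).length;
  if (words > SINGLE_CHAPTER_WORD_CAP) {
    throw new Error(
      `Chapter is ${words} words; the single-chapter path caps at ${SINGLE_CHAPTER_WORD_CAP}.`,
    );
  }
}
```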

None of this was on a sprint plan. All of it was triggered by one real user who showed up and quietly used the thing.

The only metric that mattered that night

He spent his last free credit the following morning.

We are watching Stripe.

Related tools

Want to see this in action? Upload a chapter and watch the Series Continuity agent review it against your bible.
