
Your Book, Your Name: Why AI Transparency Is About to Matter a Lot

By Mark Hankin

Let me paint you a picture. You've spent eight months writing a novel. You've agonised over every chapter, rewritten the ending three times, and finally — finally — you're ready to publish. You upload to Amazon KDP, hit submit, and get a rejection notice asking you to confirm your book wasn't generated by AI.

You know it wasn't. You wrote every word. But somewhere along the way you used an AI tool to tighten a few paragraphs, and now you're sitting there thinking: can I actually prove that?

This is the question that's quietly keeping self-published authors up at night in 2026. And it's a fair one.

What the distributors are actually saying

Amazon KDP, IngramSpark, and Lulu have all updated their content policies around AI in the last year. The details differ, but the direction is unmistakable.

Amazon KDP now requires you to disclose whether AI tools were used in generating content. They draw a line between AI-assisted (you wrote it, AI helped refine it) and AI-generated (AI produced it, you mostly watched). Both are technically permitted, but disclosure is mandatory, and Amazon will pull books and suspend accounts if they think you've been dishonest about it. They've already done this. Repeatedly.

IngramSpark has gone further — they prohibit books created using AI or automated processes. That's aimed squarely at the content-farm operations churning out hundreds of titles a month, not at you polishing your prose. But the language is broad enough to make careful authors nervous.

Lulu reserves the right to suspend accounts that misuse AI to flood the platform. Again, targeted at scale abuse, but the enforcement mechanisms don't always distinguish between the bad actors and the legitimate authors caught in the crossfire.

Here's the thing none of these policies spell out: the burden of proof is shifting. It used to be enough to say "I wrote this." Increasingly, you might need to show it.

The problem most writers don't see coming

This is what actually concerns me (and I say this as someone who builds writing tools for a living, so I've spent rather too long thinking about it).

Right now, millions of authors are using AI writing tools — grammar checkers, prose polishers, brainstorming assistants, structural analysers. These tools are genuinely helpful. Using them is completely legitimate. Nobody sensible thinks running your manuscript through a grammar checker makes you less of an author.

But very few of these authors have any record of how they used AI. No audit trail. No data. No evidence of the creative decisions they made along the way. Just a finished manuscript and their word.

Today, that's fine. Tomorrow — and this is the bit I keep coming back to — it might not be.

The trajectory is clear. AI-generated content is flooding publishing platforms at industrial scale. Amazon has already started flagging and removing books. As enforcement tightens (and it will), the author who can demonstrate their creative process is in a fundamentally different position from the author who can only say "trust me." One has evidence. The other has a problem.

It's a bit like being hauled into an interview room in Line of Duty. "Can anyone corroborate your account?" The person with evidence walks out. The person without it gets six more episodes of suspicion. Having no record of your authorship is the literary equivalent of having no alibi — you might be completely innocent, but good luck proving it when the alternative explanation is so much simpler.

What "AI-assisted" actually looks like in practice

Let's be specific about this, because the spectrum is wider than most people realise.

At one end: you use a spell-checker. Nobody questions this. It's been standard practice since Microsoft Word first put those angry red squiggles under your text. (I still remember the indignity of Word suggesting "colour" was misspelled. American software.)

In the middle: you use an AI tool to brainstorm plot alternatives, get feedback on pacing, or polish a paragraph that isn't landing. The AI suggests; you decide. Your voice, your creative vision, your editorial judgment — all human. The AI is a tool in the process, like a thesaurus or a brutally honest friend who's read a lot of books.

At the other end: someone types a prompt and publishes whatever the AI produces. That's not writing. That's ordering takeaway and calling yourself a chef.

Most real writers fall somewhere in the middle, and that's perfectly fine. The challenge is being able to show where you fall on that spectrum if anyone ever asks. Which brings me to what we've built.

How gowrite approaches this differently

When I was building gowrite, I kept thinking about this problem. Not the "AI is scary" version of it — the practical, boring, "what happens when someone asks you to prove you wrote your book" version.

gowrite's AI assistant works as a tuning tool. You select a passage you've written, and the AI helps you refine it — adjusting tone, reading level, or prose style based on parameters you control. It's your text, shaped by your decisions, with AI as a calibration instrument rather than a ghostwriter.

But here's what makes the approach different: gowrite tracks the process, not just the output.

Every time you use the AI assistant, gowrite records what happened. Not your creative thoughts or your personal data — just the practical facts. What text went in. What the AI suggested. Whether you accepted the suggestion, tweaked it, or dismissed it entirely. What percentage of your manuscript has been through the AI tuning process.

This creates something I haven't seen in any other writing tool: an authorship audit trail.
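To make that concrete, here's a rough sketch of what one audit-trail entry might look like. The field names and structure are my illustration for this post, not gowrite's actual internal schema:

```python
# Illustrative sketch of an authorship audit-trail entry.
# Field names are hypothetical, not gowrite's real data model.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AITuningEvent:
    """One AI-assistance interaction: what went in, what came back, what the author decided."""
    original_text: str   # the passage the author selected
    ai_suggestion: str   # what the assistant proposed
    decision: str        # "accepted", "edited", or "dismissed"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# An author polishes one sentence and rejects a suggestion on another.
trail = [
    AITuningEvent("The rain fell down from the sky.", "Rain fell.", "accepted"),
    AITuningEvent("She was very angry.", "Fury consumed her.", "dismissed"),
]

accepted = [e for e in trail if e.decision == "accepted"]
print(len(trail), len(accepted))  # two events recorded, one accepted
```

The point isn't the exact schema; it's that every interaction leaves a timestamped record of a human decision, which is precisely what "trust me" lacks.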

When you export or publish through gowrite, you can see your AI involvement score — a transparent, colour-coded metric showing how much of your finished text passed through AI assistance. For most authors using the tool as intended, this number is reassuringly low. It demonstrates what's true: you wrote your book, and AI helped you polish it. There's the evidence, right there.
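In spirit, the score is simple: what share of the final word count passed through accepted AI suggestions? The thresholds and colour bands below are my own guesses for illustration, not gowrite's actual numbers:

```python
# Hedged sketch of an "AI involvement score" and its colour banding.
# Thresholds here are invented for illustration.

def involvement_score(total_words: int, ai_touched_words: int) -> float:
    """Percentage of the manuscript that went through AI tuning and was kept."""
    if total_words == 0:
        return 0.0
    return round(100.0 * ai_touched_words / total_words, 1)

def colour_band(score: float) -> str:
    """Map a score to a traffic-light indicator (hypothetical cut-offs)."""
    if score < 10.0:
        return "green"   # minimal assistance
    if score < 40.0:
        return "amber"   # substantial tuning
    return "red"         # heavy AI involvement

# 3,200 tuned words in an 80,000-word manuscript.
score = involvement_score(80_000, 3_200)
print(score, colour_band(score))  # 4.0 green
```

A single honest percentage, computed from the trail rather than self-reported, is what turns "I wrote this" into something you can show.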

Why this matters at the point of no return

When you're ready to export your manuscript — whether as a PDF for print, an EPUB for ebook, or a file for upload to KDP — gowrite shows you a clear summary before you commit.

If your AI involvement is minimal (and for most authors it will be), you see a simple green indicator. That's your evidence. That's what you point to if anyone ever questions your authorship.

If the involvement is higher — say you ran every chapter through multiple rounds of AI tuning and accepted most of the suggestions — gowrite flags that honestly. Not to stop you publishing, but to make sure you know where you stand before you make representations to a distributor. Better to find out from your writing tool than from an Amazon takedown notice, I'd argue.

The bigger picture (briefly, because I know you've got a book to write)

I built gowrite on a simple belief: in a world increasingly flooded with AI-generated content, human authorship is more valuable than ever. But "valuable" only means something if it's verifiable.

The publishing industry is moving toward accountability around AI. Distributors are tightening policies. Readers are growing sceptical of content that feels synthetic. Awards and literary prizes are starting to address AI involvement. This trend will accelerate, and the authors who've been tracking their process will be glad they did.

Your story deserves to be told. It also deserves to be unmistakably, provably, yours.
