GrammarPhile Blog

Editing AI-Generated Text

Posted by Sara Richmond   May 9, 2024 9:45:00 AM

There’s a common joke among writers: “It’s easier to start over than to copyedit.”

Nobody laughs at the joke. They mostly just nod their heads, as if to say, “It’s funny ’cause it’s true.” Copyediting poor writing is a little like trying to clean a hoarded house without removing the hoard first.

The interesting bit about that joke is that it came about in response to human writing, and not even necessarily poor writing. But it applies to AI-generated copy as well.

As a team of writers, proofreaders, editors, copywriters, and copyeditors who’ve seen the good, the bad, and the dreadful across nearly every industry, organization size, and type of content imaginable, we feel compelled to reveal a few downsides.

The Downsides of AI-Generated Text

We understand the impetus for using AI-generated copy. It’s fast; it’s cheap; it’s easy. It doesn’t talk back.

But these aren’t hypothetical downsides. They’re not reactive, “We’re afraid for our jobs” types. They’re not projected insecurity (i.e., “A machine can write better than moi? Better blow it to pieces.”). These are downsides we’ve already encountered, many times.

For example, consider a proposal one of our beloved clients recently submitted. We are intimately familiar with their voice, their products, their style guide, their content types, and their business model. It was immediately clear that the copy was AI-generated. It was also clear that moving it from somewhere between awkward and nonsensical to succinct and logical would take more than a proofread. We had to copyedit it…heavily.

Upon review, we made roughly five times as many necessary edits as we normally do on jobs of the same length from this client. Oof.

Topics: artificial intelligence, AI

Why We’re Largely Ignoring AI-Generated Writing

Posted by Sara Richmond   Dec 22, 2022 10:15:00 AM

If you haven’t heard the news that AI is positioned to kick all human writers to the curb after scoffing at their turtle-like slowness, then you may be living under a rock (and for that, we salute you).

Every Tom, Mick, and Sherry is writing an opinion piece, post, or pop song about the recently released ChatGPT, a chatbot with the tagline “Optimizing Language Models for Dialogue.” It claims to be able to “answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

It’s certainly not the first AI-writing generator, and it won’t be the last. So what’s different? Why the hoopla and apocalyptic predictions?

In a sentence: Because compared to many of its predecessors and peers, ChatGPT produces intelligible, lightning-fast writing, even based on loose prompts. Poor human writing (the kind content mills produce) doesn’t stand a chance.

That’s exactly why we’re not worried about AI writing taking writers’ jobs (or ours). We deal in quality—the intuitive, agile, creative kind that machines will never be able to fully emulate.

When it comes down to it, we don’t even buy the delivery promise of AI writing: quick, adaptable, readable copy. It’s like giving a candle as a housewarming gift to an intimate old friend. It checks the box, but there’s no rapport, no true depth, and no personalization. (And that analogy doesn’t even touch the moral ambiguity of how AI-generated copy sources material without attribution.)

Topics: artificial intelligence, AI
