“OMG! Grammarly is using more AI and everything just got more complex. WHYYYYY?”
If you are not familiar with Grammarly, it used to be a valuable editorial tool. It caught most errors, and when you edit copious amounts of content, you need a little help. I began using Grammarly’s paid subscription when I shifted into news writing in 2019, a field that requires a different level of precision than technical or fiction work. It was a lifesaver and well worth the $88 annual price tag, which has since soared to $144. I am a better writer for it.
My wonderful, efficient tool has turned with the tide of the masses and now uses AI not only to identify errors but also to make suggestions well beyond grammatical correctness.
AI writing suggestions are taking over. I cannot even finish a thought in an email or Word document without my sentences being completed, often incorrectly. I spend at least half my time erasing words I clicked on too quickly, only to find them sitting there on the page, saying “conversation” when I meant “convention.” I am sick of Copilot asking to compose my emails, unless of course it can now correctly divine everything I need to write.
Grammarly and tools like it now have generative AI built in. That means they are no longer just flagging grammar and clarity issues, but actively suggesting rewrites, tone changes, and “improvements” that can alter meaning, flatten voice, and even trip plagiarism or AI-detection tools.
Doesn’t this blur the line between editing and authorship? How are developing writers supposed to find their own voice if a machine is constantly suggesting their voice is not right, valid, or meaningful? As a small example, my Word grammar tool flagged at least five items in this very document as errors that categorically are not. How would a young writer know that, when they have been taught to trust the machine over their own learning?
Further, AI writing aids are now incorporating what can only be described as “tone governance.” They have moved from correcting language to subtly influencing worldview, sentiment, and framing, particularly around “inclusive,” “positive,” and “softened” phrasing. For journalists, that is a problem. Facts are facts, quotes are historical records, and sadly, life is not always positive, inclusive, or soft. In fact, it rarely is.
We now see AI moralizing language choices instead of checking accuracy. It smooths conflict where conflict is the story and reframes it as something to be “improved.” Tools like this are quietly editing intent, assuming there is a correct emotional position for writing.
I worry that over time, this influence could conflict with personal, moral, and religious beliefs. Tools like Grammarly used to be mechanical. They corrected grammar based on rules. What we get now are suggestions based on what the system believes is better, kinder, safer, or more appropriate.