The concept of ‘centaur writing’ comes to us from the world of centaur chess, which is a variant of chess competition in which players use computational assistance during play.
Often this means players will sit with a board between them, but will also have access to one or more computers, each running sophisticated chess software that provides insights and ideas the players can use or choose not to use.
Wielding such tools in conventional chess competition (at times via allegedly elaborate and covert means) is a big no-no, but when both players have the same advantage it’s considered a legitimate approach to the game: neither player enjoys a one-up on the other, and the strategy of the game shifts to incorporate each player’s augmentation (similar to how warfare changed with the introduction of horses, hence the name).
Centaur writing is similar in that it refers to a writer who is augmented by computational tools, rather than being replaced by them. The human writer’s judgement, understanding, and experience remain fundamental to their output, even if some of their choices are influenced by their tools.
Most modern writers already benefit from a slew of technological augmentations, like the spelling- and grammar-checkers built into most of the software we use. Such tools may have been met with raised eyebrows in previous generations—shouldn’t a writer be able to manage their own spelling and grammar, after all? Isn’t such knowledge fundamental to being a skilled and practiced author?—but today we take them for granted, and few would consider their use to be an unfair advantage.
I wonder which of the new tools that are becoming mainstream today will take a similar path toward acceptance within the writing community.
Many writing tools and even entire operating systems are beginning to offer the benefits of generative AI to the everyday, non-tech-savvy user, and that means more of the words we’re producing are at least partially scrivened by software, not writers. They summarize, they rework, they polish, they make suggestions we can choose to accept (or dismiss) wholesale.
How long until these tools begin to auto-accept these ‘corrections,’ similar to how they auto-correct our spelling, today?
I’ve experimented with all sorts of AI writing tools, and while some have been useful for the brainstorming component of my process, I’ve found the ones that generate actual text to be not worthless, exactly, but definitely generic, beige, and lifeless. That’s the opposite of what you hope for if you write professionally and find joy or satisfaction in the process of writing.
These tools are often oriented toward a centralized, inoffensive norm, and that makes sense if you want to produce words that communicate information in the most accessible and universalized manner possible. If you’re writing a grant proposal or resume or a corporate email, it may be useful to sand down some of the rough edges and strip away some of the personality in what’s being transmitted.
This ensures the information encoded in the language being used is more accessible to more people, and for folks who simply don’t read or write very often (which is a great many of us) this is maybe the most straightforward possible path to dispersing necessary knowledge and know-how.
The path is less clear for other types of writing, though, as while I expect these tools will get better and better at producing intelligible prose, I also tend to think that a writer’s flaws and oddities are their most useful attributes.
Intentional deviations from the ‘correct’ way of doing things often serve as the ambergris in a writer’s perfume: they’re the unpalatable ingredients that serve as a basis for far more sophisticated and enjoyable outputs.