
ARS9 Editing: A shortcut to valuable, objective revision suggestions

Ryan Farley

Wednesday, April 24, 2024

Earn the privilege of telling someone how to improve their writing by filtering all your feedback through a standardized scoring system.

Most people edit written text dozens of times a day. Sometimes, it’s small, like tweaking an email before hitting Send. Other times, it’s substantial, like commenting on a boss’s long-winded blog post. Whatever the case, an editing framework is the easiest way to provide clear, consistent, and constructive edits that enhance the original text.

Strangely, there seems to be a shortage of editing techniques, methodologies, and heuristics compared to those for writing.

Google shows 7.6 billion results for the query “writing tips” versus 418 million for “editing tips” (18x difference). That doesn’t appear due to query vagueness, either (you can write code and edit video, after all). “Blog writing tips” generates 976 million results versus 134 million for “blog editing tips” (7x difference). And you can validate these numbers even further with a keyword tool like Ahrefs:

[Image: Side-by-side screenshots of Ahrefs keyword reports for 'blog editing tips' and 'blog writing tips']

I’d argue there’s less demand for editing advice because people confuse editing with rewriting. Someone asks for your feedback on a social media post or a message to a frustrated customer, and you respond by writing your own take. You apply writing tips during the editing process. 

Rewriting is not the same as editing.

There’s nothing wrong with rewriting someone else’s work “on the fly” in the same way that there’s nothing wrong with walking into the gym without a routine. It’s fine for short bursts. And it’s better than doing nothing. Until it’s not.
 

[Image: A screenshot of ad hoc edits from me that contradict the article's thesis as well as my own previous suggestions]

Editors tend to come out of the gate swinging, rapid-firing suggestions and corrections, only to inevitably realize that they’ve lost the thread. They end up recommending changes that contradict earlier edits, referencing previous comments that are nowhere to be found, changing the original thesis, and generally pulling the writer down their own unstructured, unhelpful rabbit holes.

Haphazard editing is especially problematic when editing, or being edited by, someone with whom you don’t have a working rapport (e.g. new freelancers, clients, students, etc.). 

So, after several years of collaborative editing, I’ve honed a methodology for creating clear, consistent, and constructive feedback with minimal cognitive load. It takes only a few minutes to learn, and I have a stock one-paragraph explainer that I paste at the top of all documents when working with a new writer. 

Possibly the most valuable benefit of this approach is that it significantly reduces the chances of a writer feeling personally attacked and, therefore, less receptive to my suggestions. 

How I apply the ARS9 editing framework

The framework I use is represented as a three-digit number placed in front of any explanation of an edit. Each digit lets me indicate the severity of an issue for a specific category (inspired by Ad Whittingham's “How much do you care” system), without any unnecessary hedging or wordy preamble to the edit itself.

ARS is an acronym for the three categories of edits I believe are most important (Accuracy, Relevance, and Style). Nine is a reference to the 0-9 score that I assign to each category when providing an edit (0 means “no problem” while 9 indicates a significant revision is needed). 

Scores can be applied to words, sentences, or paragraphs and work like this: I provide a 0-9 score for each of the ARS categories and combine them into a single number. 999 would mean that whatever I've highlighted has critical issues across all three categories. 100, 010, or 001 would mean the smallest possible problem in only one category.
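If it helps to see the mechanics spelled out, here’s a minimal sketch in Python of how the three ratings combine into one number. The function name and validation are my own illustration; the framework itself requires no tooling.

    def ars9_score(accuracy: int, relevance: int, style: int) -> str:
        # Combine three 0-9 severity ratings into a single ARS9 string.
        for name, value in (("accuracy", accuracy), ("relevance", relevance), ("style", style)):
            if not 0 <= value <= 9:
                raise ValueError(f"{name} must be between 0 and 9, got {value}")
        return f"{accuracy}{relevance}{style}"

    print(ars9_score(9, 9, 9))  # 999 -- critical issues in every category
    print(ars9_score(0, 1, 0))  # 010 -- the smallest possible relevance problem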

Scoring the Accuracy category

The first 0-9 slot scores text based on how verifiable or factually correct the statement is. 

  • Example: “ARS9 is the first editing framework in the whole friggin’ universe.”

  • Comment: “900 Unverifiable” 

Would providing the text comment without the score accomplish the same thing? 100%. ARS9 isn’t particularly useful for one-dimensional edits. But it does force you to steer away from lazy suggestions and evaluate a writer’s words across multiple categories.

Scoring the Relevance category

The middle 0-9 number indicates how relevant a passage is to the thesis of the entire document.

  • Example: “ARS9 is the first editing framework in the whole friggin’ universe.”

  • Comment: “920 Unverifiable. I don’t think age correlates to usefulness.”

If publishing a false claim is the worst thing a writer can do, second place goes to directionless drivel (some SEOs might disagree). Is the example sentence above wildly irrelevant to this article’s thesis (italicized in the opening paragraph)? Not necessarily. I might not have even noticed that this statement has a tiny relevance problem if it weren’t also factually inaccurate.

(Note: I italicize my thesis when publishing and highlight other writers’ theses when editing. This keeps everyone accountable and focused on a single, specific argument.)

So, we noticed one issue in the example passage (accuracy), which forced us to consider another potential stumbling block (relevance), and possibly a third.

Scoring the Style category

The third slot is reserved for commenting on style. 

  • Example: “ARS9 is the first editing framework in the whole friggin’ universe.”

  • Comment: “927 Unverifiable. I don’t think age correlates to usefulness. Bombastic and tonally out of sync with the rest of the article.”
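As a sketch of how a full comment like the one above comes together (the helper below is hypothetical, just a way to make the convention concrete), the score simply prefixes whatever notes the edit needs:

    def ars9_comment(accuracy: int, relevance: int, style: int, *notes: str) -> str:
        # Each score is a single digit: 0 (no problem) to 9 (significant revision needed).
        if any(not 0 <= d <= 9 for d in (accuracy, relevance, style)):
            raise ValueError("each category score must be 0-9")
        return f"{accuracy}{relevance}{style} " + " ".join(notes)

    print(ars9_comment(9, 2, 7,
                       "Unverifiable.",
                       "I don't think age correlates to usefulness.",
                       "Bombastic and tonally out of sync with the rest of the article."))
    # 927 Unverifiable. I don't think age correlates to usefulness. Bombastic and ...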

Style suggestions are where many editors expend the most effort. I’d argue that writers and editors should worry about this category least. 

My favorite Neil Gaiman quote is, “When people tell you something's wrong or doesn't work for them, they are almost always right. When they tell you exactly what they think is wrong and how to fix it, they are almost always wrong.” 

In my one-paragraph explainer of ARS9, I write that a non-zero number in the third slot means the highlighted language doesn’t work for me. That could be because I don’t know the writer’s target audience well enough, I’m unfamiliar with popular vernacular, or I’m just having a bad day and the tone doesn’t jibe with my current mood. But style suggestions should never be brushed aside, either.

It’s the writer’s responsibility to consider whether edits reflect individual preferences or those of a larger group. Ultimately, if the language is factually correct and furthers an argument worth debating, inconsistent or questionable style is more distraction than dealbreaker.

What started as a quick fact check became a thoughtful and objective comment on relevance and style. One that, in my opinion, is significantly improved by including the numerical score.

Why I use ARS9 scores when standalone comments would suffice

Including the three-digit score accomplishes two things. First, it crams more information into fewer characters. I can replace imprecise explanations of severity with a single character. Qualifying statements like “Not a critical issue, but this feels a little…” or “Kind of a nitpicky thing but…” can be represented with a 1 or a 2. Zeroes act as shorthand for statements like “this is factually correct but…” or “this is really well written but…” so I can get to the meat of the issue.
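To illustrate that shorthand in code (the severity bands below are my own reading; ARS9 itself only defines 0 as “no problem” and 9 as “significant revision needed”), each digit maps back to the kind of qualifier it replaces:

    # Hypothetical mapping from a digit back to the hedging phrase it replaces.
    def describe(digit: int) -> str:
        if digit == 0:
            return "no problem"
        if digit <= 2:
            return "kind of a nitpicky thing"
        if digit <= 5:
            return "worth a closer look"
        if digit <= 8:
            return "needs real work"
        return "significant revision needed"

    def expand(score: str) -> str:
        labels = ("Accuracy", "Relevance", "Style")
        return "; ".join(f"{label}: {describe(int(d))}" for label, d in zip(labels, score))

    print(expand("927"))
    # Accuracy: significant revision needed; Relevance: kind of a nitpicky thing; Style: needs real work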

Second, ARS9 puts guardrails on my suggestions, forcing me to filter comments through a rubric. Even after 10 years of writing professionally, it’s still tempting to provide imprecise blanket statements like “You should rewrite this” when what I really mean is “This is correct and relevant but poorly written.” Guardrails force me to slow down. I have to consider passages more carefully. I could simply run through the rubric in my head, without writing out the score, but I'm confident that I'd eventually slip more and more often, shooting from the hip without sticking to the system.

With those two things in mind, anyone could tweak the framework in any number of ways. Maybe you want to make the style slot 0-5 in deference to its diminished importance. Or you want to use a special character to indicate which scores relate to which written explanations. Or, for longer and more complex writing, you might append a letter to the end of a score to give it an identifier (e.g. “see comment AA”).
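Sketched in code, one such variant might cap the style slot at 5 and accept an optional comment identifier; both tweaks are illustrations of the ideas above, not features of ARS9 itself:

    def variant_score(accuracy: int, relevance: int, style: int, comment_id: str = "") -> str:
        # Style is capped at 0-5 in deference to its diminished importance.
        if not (0 <= accuracy <= 9 and 0 <= relevance <= 9 and 0 <= style <= 5):
            raise ValueError("accuracy/relevance must be 0-9; style must be 0-5")
        return f"{accuracy}{relevance}{style}{comment_id}"

    print(variant_score(9, 2, 4, "AA"))  # 924AA -- "see comment AA"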

I’m a proponent of the Rule of Three, so I don’t think adding score categories is a good idea. But there are countless other modifications you could make without sacrificing the system’s simplicity, information density, and objectivity.

Editing frameworks benefit both the writer and editor

I would never argue that my ARS9 technique is perfect for every editor or use case. I would contend, however, that everyone should use a structured system when providing critical feedback to other writers. It will save you time and make you a more precise editor.

Your boss, new client, fellow freelancer, or whoever you’re working with isn’t interested in low-effort rewrites. They want clear, consistent, and constructive feedback. And an editing framework is the quickest and easiest way to become their favorite collaborator.


Image Credit: Header photo by Wonderlane on Unsplash
