In the publishing industry, a “beta reader” is a human being who reads your unpublished book and provides feedback, not unlike a book review. Beta readers are subjective, and can provide only their opinion, but they can be very useful in gauging how an average reader will respond to your book.
The industry also offers software services such as Grammarly. These are computer programs that analyze your text and compare it with a known standard. They are useful for identifying obvious typos and errors of basic grammar. They are not very useful in providing a subjective or emotional response to your book.
Recently a valued client of mine sent her manuscript to a “beta reader” service. The resulting report had the veneer of personalization, but the work had clearly been done by software.
The “beta reader” said that he or she (the report was unsigned) had submitted the 38,000-word manuscript to a website touting the Flesch-Kincaid system. This is a computer program that “reads” the text, analyzes its surface features (as Grammarly does), and compares the results to an algorithm. It does not consider the meaning of the text, nor whether you’re writing fiction, nonfiction, or even poetry. It then assigns a “readability score” from zero to 100; the higher the score, the more “readable” your text is supposed to be. The website offers Ernest Hemingway’s “The Old Man and the Sea” as an example of a very high readability score.
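The underlying Flesch Reading Ease formula is public and surprisingly simple: it looks only at sentence length and syllable count, never at meaning. Here is a minimal Python sketch; the syllable counter is a naive vowel-group heuristic (real implementations use pronunciation dictionaries), so scores will differ slightly from commercial tools:

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of vowels; every word gets at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    # Published Flesch Reading Ease formula:
    #   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Note what is absent: nothing in the formula knows whether the words form a sonnet, a thriller, or nonsense. A short, simple sentence scores high regardless of what it says.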
The Flesch-Kincaid report listed various words that were “overused,” such as adverbs and prepositions. It was obvious that the software had scanned the manuscript, counted the occurrences of various types of words and punctuation marks, and flagged the ones that exceeded the algorithm’s thresholds.
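This kind of counting is trivial to automate, which is exactly why it involves no editorial judgment. A hypothetical sketch of the approach, using a made-up threshold and a crude “ends in -ly” test for adverbs (the real tool’s rules and cutoffs are not published), shows how mechanical it is:

```python
import re

def flag_overused_adverbs(text, threshold_per_1000=15):
    # threshold_per_1000 is an invented cutoff for illustration only.
    words = re.findall(r"[A-Za-z']+", text.lower())
    # Crude test: treat anything ending in "ly" as an adverb.
    adverb_like = [w for w in words if w.endswith("ly")]
    rate = 1000 * len(adverb_like) / len(words)
    return {"adverb_rate_per_1000": rate, "flagged": rate > threshold_per_1000}
```

The crudeness is the point: a counter like this would tag “family” and “only” as adverbs, and it has no way of knowing whether a flagged word is doing good work in its sentence.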
In the report, I was particularly intrigued by the admonitions about font enhancements. After a long explanation about why WRITING IN ALL CAPS is bad (which we never do!), the report said:
“Here are some samples from your book where you use font enhancements unnecessarily, followed by the recommended change:”
What followed were a dozen perfectly innocuous uses of italics.
The report given to my client said this about M-dashes:
“You tend to overuse m-dashes. M-dashes ( — ) are often used in place of parentheses or commas, or to indicate abrupt pauses or interruptions in dialog or action. They are also used to indicate dramatic emphasis. You sometimes incorrectly use n-dashes ( – ) in place of m-dashes. N-dashes should be used for joining words or numbers in a range (eg: 2–5 minutes; January–March). You sometimes use n-dashes in place of hyphens.”
I searched my client’s 38,000-word text and found exactly two places where an M-dash was used incorrectly. As for “overuse,” that’s purely subjective.
I began to wonder whether there had been any human involvement in this so-called beta read at all. So I went to the Flesch-Kincaid website and ran a few tests myself. The site claims it can analyze web pages and rate them for readability. For the test, I fed it articles from two highly technical peer-reviewed publications: “The Journal of Public Health” and “Science Robotics.” These are dense, difficult texts. To my surprise, both pages received high readability scores: 46.1 and 45, which according to the site made them suitable for ninth graders.
It makes you wonder what an article with a score of 10 would have to look like.
My point is that these software tools are useful for detecting obvious typos, such as the aforementioned M-dashes. But beta reading? No, that’s not what they do. For that, you must have a real human being.
- Thomas Hauck is a professional freelance ghostwriter, book developer, and occasional beta reader for his global clients.