Last Updated on 05/12/2020 by OTC
Posted by Evan_Hall
If you handed someone a blank sheet of paper and the only thing written on it was the page’s title, would they understand what the title meant? Would they have a clear idea of what the actual document might be about? If so, then congratulations! You just passed the Blank Sheet of Paper Test for page titles because your title was descriptive.
The Blank Sheet of Paper Test (BSoPT) is an idea Ian Lurie has talked about a lot over the years, and recently on his new website. It’s a test to see if what you’ve written has meaning to someone who has never encountered your brand or content before. In Ian’s words, “Will this text, written on a blank sheet of paper, make sense to a stranger?” The Blank Sheet of Paper Test is about clarity without context.
But what if we’re performing the BSoPT on a machine instead of a person? Does our thought experiment still apply? I think so. Machines can’t read—even sophisticated ones like Google and Bing. They can only guess at the meaning of our content, which makes the test especially relevant.
I have an alternative version of the BSoPT, but for machines: If all a machine could see is a list of words that appear in a document and how often, could it reasonably guess what the document is about?
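This machine version of the test is easy to simulate: count how often each word appears and look at the top of the list. Here's a minimal sketch in Python (the sample sentence is my own invented example, not from the knife article discussed below):

```python
from collections import Counter
import re

def word_frequencies(text):
    """Lowercase the text, split it into words, and count occurrences."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

# An invented two-sentence "document" for illustration.
doc = (
    "Hold the knife at a 20-degree angle against the whetstone. "
    "Draw the blade across the whetstone, sharpening the full edge of the knife."
)
freqs = word_frequencies(doc)
print(freqs.most_common(5))
```

Even from just the top of that list (knife, whetstone, blade, edge), a stranger could make a decent guess at the topic.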
The Blank Sheet of Paper Test for word frequency
If you handed someone a blank sheet of paper and the only thing written on it was this table of words and frequencies, could they guess what the article is about?
An article about sharpening a knife is a pretty good guess. The article I took this word frequency table from was a how-to guide for sharpening a kitchen knife.
What if the words “step” and “how” appeared in the table? Would the person reading be more confident this article is about sharpening knives, or less? Could they tell if this article is about sharpening kitchen knives or pocket knives?
If we can’t get a pretty good idea of what the article is about based on which words it uses, then it fails the BSoPT for word frequency.
Can we still use word frequency for BERT?
Earlier natural language processing (NLP) approaches employed by search engines used statistical analysis of word frequency and word co-occurrence to determine what a page is about. They ignored the order and part of speech of the words in our content, basically treating our pages like bags of words.
The tools we used to optimize for that kind of NLP compared the word frequency of our content against our competitors, and told us where the gaps in word usage were. Hypothetically, if we added those words to our content, we would rank higher, or at least help search engines understand our content better.
Those tools still exist: Market Muse, SEMRush, seobility, Ryte, and others have some sort of word frequency or TF-IDF gap analysis capability. I’ve been using a free word frequency tool called Online Text Comparator, and it works pretty well. Are they still useful now that search engines have advanced with NLP approaches like BERT? I think so, but it’s not as simple as more words = better rankings.
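As a rough illustration of what these gap-analysis tools do under the hood, here's a sketch that scores words competitors use but our page doesn't, weighted by TF-IDF. The pages and the scoring details are invented for the example; real tools work on full crawled pages and also filter stop words:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def gap_words(our_page, competitor_pages):
    """Words competitors use that our page is missing, ranked by TF-IDF weight."""
    corpus = [our_page] + competitor_pages
    doc_sets = [set(tokenize(d)) for d in corpus]
    ours = set(tokenize(our_page))
    gaps = Counter()
    for page in competitor_pages:
        counts = Counter(tokenize(page))
        total = sum(counts.values())
        for word, count in counts.items():
            if word in ours:
                continue  # not a gap: we already use this word
            df = sum(1 for s in doc_sets if word in s)  # document frequency
            idf = math.log(len(corpus) / df)
            gaps[word] += (count / total) * idf  # TF * IDF, summed over pages
    return [w for w, _ in gaps.most_common()]

our_page = "Pick a chef's knife with a good handle and a fair price."
competitors = [
    "A chef's knife blade should be high-carbon steel with a comfortable grip.",
    "Grip the knife handle; a hard steel blade keeps its edge longer.",
]
print(gap_words(our_page, competitors)[:5])
```

Words like "steel," "blade," and "grip" surface near the top, which matches the kind of output you'd see from the commercial tools.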
BERT is a lot more sophisticated than a bag-of-words approach. BERT looks at the order of words, part of speech, and any entities present in our content. It’s robust and can be trained to do many things, including question answering and named entity recognition—definitely more advanced than basic word frequency.
However, BERT still needs to look at the words present on the page to function, and word frequency is a basic summary of that. Now, word location and part of speech matter more. We can’t just sprinkle the words we found in our gap analysis around the page.
Enhancing content with word frequency tools
To help make our content unambiguous to machines, we need to make it unambiguous to users. Reducing ambiguity in our writing is about choosing words that are specific to the topic we’re writing about. If our writing uses a lot of generic verbs, pronouns, and non-thematic adjectives, then not only is our content bland, it’s hard to understand.
Consider this extreme example of non-specific language:
“The trick to finding the right chef’s knife is finding a good balance of features, qualities and price. It should be made from metal strong enough to keep its edge for a decent amount of time. You should have a comfortable handle that won’t make you tired. You don’t need to spend a lot either. The home cook doesn’t need a fancy $350 Japanese knife.”
This copy isn’t great. It looks almost machine-generated. I can’t imagine a full article written like this would pass the BSoPT for word frequency.
Here’s what the word frequency table looks like with some stop words removed:
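You can reproduce a table like this by counting the words in the example copy above and filtering out a small stop-word list. The stop-word list here is a hand-picked illustration, not what any particular tool uses:

```python
import re
from collections import Counter

# A hand-picked stop-word list for this example only.
STOP_WORDS = {
    "the", "to", "a", "of", "and", "it", "its", "is", "be", "from",
    "that", "you", "your", "for", "should", "have", "wont", "dont",
    "doesnt", "need", "lot", "either", "make", "made", "an",
}

copy = (
    "The trick to finding the right chef's knife is finding a good balance "
    "of features, qualities and price. It should be made from metal strong "
    "enough to keep its edge for a decent amount of time. You should have a "
    "comfortable handle that won't make you tired. You don't need to spend "
    "a lot either. The home cook doesn't need a fancy $350 Japanese knife."
)

# Strip apostrophes so contractions match the stop-word list, then count.
words = re.findall(r"[a-z]+", copy.lower().replace("'", ""))
freqs = Counter(w for w in words if w not in STOP_WORDS)
print(freqs.most_common(10))
```

The surviving words ("knife," "finding," "edge," "handle") are the ones a stranger would have to guess the topic from, and here they're thin.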
Now suppose we used a word frequency tool on a few pages that are ranking well for “how to pick a chef’s knife” and found that these parts of speech were being used fairly often:
Entities: blade, steel, fatigue, damascus steel, santoku, Shun (brand)
Verbs: grip, chopping
Adjectives: perfect, hard, high-carbon
Incorporating these words into our copy would yield text that’s significantly better:
“The trick to finding the perfect chef’s knife is getting the right balance of features, qualities, and price. The blade should be made from steel hard enough to keep a sharp edge after repeated use. You should have an ergonomic handle that you can grip comfortably to prevent fatigue from extended chopping. You don’t need to spend a lot, either. The home cook doesn’t need a $350 high-carbon damascus steel santoku from Shun.”
This upgraded text will be easier for machines to classify, and better for users to read. It’s also just good writing to use words relevant to your topic.
Looking toward the future of NLP
Is improving our content with the Blank Sheet of Paper Test optimizing for BERT or other NLP algorithms? No, I don’t think so. I don’t think there is a special set of words we can add to our content to magically rank higher through exploiting BERT. I see this as a way to ensure our content is understood clearly by both users and machines.
I anticipate that we’re getting pretty close to the point where the idea of optimizing for NLP will be considered absurd. Maybe in 10 years, writing for users and writing for machines will be the same thing because of how far the technology has advanced. But even then, we’ll still have to make sure our content makes sense. And the Blank Sheet of Paper Test will still be a great place to start.