AI Is Not the Almighty SERP Overlord You Think It Is, AKA:
I Spent Over a Half-Hour Trying to Get Gemini to Write Me a Funny Header For This Story And Here We Are So If You Think AI Can Write You Stellar Content That Will Magically Make Your SERPs Shoot Up Higher Than Sergey Brin’s Net Worth Then I Guess You’re Funnier Than Gemini And Your SERPs Are Doomed
OK, so I hyperbolize(ish). But so does AI.
First, let me back up a few steps.
Firstliest, when I was first tasked with writing this here blog, I posed the question, Is There Ever a Responsible & Ethical Way to Use AI in Content Syndication? This is also the header Gemini failed to make “punnier.” Some options it gave me, for your “entertainment,” included:
- Is There Ever an Ethical AI-dvantage to Content Syndication?
- To AI or Not to AI: The Ethical Content Conundrum (This plays on Shakespeare's famous line and adds "content" for relevance)
- Content Chameleon: Can AI Blend in Without Being Biased? (This uses a metaphor and highlights the potential ethical issue of bias)
- AI-dentify Your Audience: The Ethical Way to Spread Your Content Wings (This combines the concept of AI with identifying your audience and uses a bird metaphor)
- The Ethics of Automation: Is AI a Content Distribution Mastermind or a Machine Without a Moral Compass? (This creates a sense of intrigue and uses contrasting metaphors)
- Content with a Conscience: The Responsible Use of AI in Syndication (This is a simpler option that uses a play on words with "content")
To which I responded, ever so politely and helpfully: “Boo. Try again.” And things really just got worse from there. Uninspired by Gemini, I told it Bard was far superior and that it’s really a shame Google killed it for such a weaker model, then moved on to a new query.
That query was: “Is AI going to make content syndication useless?”
OK, great. Now we’re cooking with the remains of Bard’s plastic, AI ashes. But this time, I wanted to see what would happen if I went to the real pros, aka random bloggers I found while Googling that exact question on the interwebs, you know, the good ol’ fashioned way, back before bots passive-aggressively gave us half-baked answers that take more time to fact-check than to just research for yourself in the first place.
Now, this is when a curious thing happened. I started to notice that many (and by many I mean, like, many) of these AI and SEO for content syndication “experts” selling services online seemed rather, well, botty. And not in a good practice-what-you-preach way, either. No, these expert “guides” and “thought pieces” would have earned the dreaded “See me after class” stamp in school, never mind passing the journalistic “sniff” test.
But hey, why believe me? I could just be besmirching the good and fair names of competitors or just making them up altogether to make myself look better, right? Yeah, sure. So then let’s look at a terribly egregious example of an AI-produced content piece I found on the site of an SEO/SERP-boosting service, generated via their own AI content generator. I’ve removed their branding to keep it blind, no besmirching needed. But I chose this piece because, after seeing hundreds of these AI pieces over the last two years or so, this one really is a prime example of the first draft the AI usually spits out. Yes, I said first draft – I have reason to believe this site published the piece as-is without even editing their AI friend’s work (or at least I hope so).
Yes, Most AI-Written Content Sucks. I’ll Prove It.
So, as I was reading the above-mentioned suspicious piece, I elected to go straight to the expert: a free AI detector called Phrasly, a tool designed to get suckers to pay for an account so its robot can make their totally robot-made content somehow sound more human (the irony here is not lost on me). There are other programs just like this as well (Hive, QuillBot, Undetectable AI, Scribbr, just to name a few), so pick your poison. With my poison chosen and in hand, I threw a couple of paragraphs of the suspicious piece into the AI detector (there’s a limit on the free version, of course), and lo and behold, this is what I found:
Yup, 100% likely AI, 7/7 sentences likely AI. And if you take bits and pieces of those sentences or a sentence or two and pop them into Google, you can see a lot of sites with that same or nearly identical language. Because it turns out a lot of people are trying to use AI to slam out as much content as humanly possible.
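If you’d rather not play the copy-paste-into-Google game by hand, here’s a minimal Python sketch of the same idea: chop two passages into overlapping word “shingles” and measure how much they overlap. The two sample strings are invented for illustration; in practice you’d swap in the suspect paragraph and text scraped from another site.

```python
# Rough duplicate-language check: compare a suspect passage against text from
# another page using word-shingle (n-gram) overlap. Sample strings are made up.

def shingles(text: str, n: int = 5) -> set:
    """Break text into overlapping n-word chunks ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Share of shingles the two texts have in common (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

suspect = "AI in content syndication offers powerful tools for content personalization and curation."
other_site = "AI in content syndication offers powerful tools for personalization and content curation."

score = jaccard(shingles(suspect), shingles(other_site))
print(f"Shingle overlap: {score:.0%}")  # a high score suggests recycled boilerplate
```

A high overlap score doesn’t prove anything on its own, but it’s a quick way to flag recycled boilerplate that’s worth a closer look.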
And it’d be one thing if that content were good, or even OK-ish. But the closer you look at that content and the more you start reading, the cringier it gets, especially when you keep in mind that in this specific example, I took this from a site that’s selling AI content generation services. Yet the AI content they’re putting up on their own site is questionable at best, unintelligible at worst. To be fair, I found these same trends time and time again across a plethora of sites and blogs.
The more I read these AI-created articles, the more I started to see some pretty standard “formulas” and patterns that AI seems to follow when making this content. So, I took screenshots of some of the most common mistakes AI makes when creating content, along with why these errors make AI-generated articles unsuitable, low-quality SEO content for websites.
OK, let’s start with the header and author:
AI headers really suck; they tend to be broad, generic topics that don’t actually tell you what the article’s about and instead read more like a string of SEO keywords strung together with no context. This is problematic because, most obviously, a header should be enticing and tell you something about what the article’s actually going to cover (think: “AI may not be good for syndication after all,” or something like that). Moreover, Google and other search engines are going to see this generic header alongside a billion other articles with the same or similar generic title, making it that much harder to rank for.
Another thing to address here: algorithms and readers are smart. Not having a real author name is a huge “an AI wrote this” red flag. And, as you’ll see from the screenshots below, they’ll also notice things that may seem like minor formatting inconsistencies. Having the main header of the entire article in sentence case while the sub-headers are all in title case is odd, and AI introduces formatting inconsistencies like that all the time. So this is another detail you’re going to have to go over with a fine-tooth comb if you do decide to use AI to write your content.
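If you want to automate part of that fine-tooth-comb pass, here’s a rough Python sketch of one such check: flagging a draft whose headers mix sentence case and title case. The header strings and the crude title-case test are stand-ins I made up, not pulled from any real tool.

```python
# Hypothetical sanity check: warn when a draft's headers mix sentence case and
# Title Case. The header list below is invented for illustration.

def is_title_case(header: str) -> bool:
    """True if every word longer than 3 letters starts with a capital."""
    words = [w for w in header.split() if len(w) > 3]
    return all(w[0].isupper() for w in words)

headers = [
    "The role of AI in content syndication",   # sentence case
    "Benefits Of AI In Content Syndication",   # title case
    "Examples Of AI In Content Syndication",   # title case
]

styles = {is_title_case(h) for h in headers}
if len(styles) > 1:
    print("Warning: headers mix sentence case and title case -- review them by hand.")
```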
Now to the actual writing. While there are a couple of things wrong with this first section, what I really want to focus on is what I call “vapid language.” Go ahead and read that first paragraph. Now, what did it say? How many of those sentences could you omit without losing any meaning at all? AI can’t always comprehend the stuff it’s sucking up from the interweb, so what it spits out is the equivalent of a 14-year-old’s book report on “Romeo and Juliet” when they haven’t actually read it; sure, everyone has an idea of what happens in the play, but they can’t tell you the specifics. Throw in a word count requirement? You can bet you’ll see a lot of repetitive, vague language that sure sounds smart but, at the end of the day, really doesn’t say a whole lot.
But if, like Shania Twain, that example of AI “bad content” don’t impress you much, just look at the introduction paragraph below it. It’s clear that the user/writer here asked the AI to write an introduction to the topic at hand, but the AI misinterpreted the prompt to mean, “this is the type of stuff you should include in such an introduction.” OK, sure, mistakes happen. The writer could have then asked the bot to actually write that paragraph, but it’s clear they never caught the mistake, and it was posted live to their website. Oops.
On to the next one! As I say in this screenshot, I have read hundreds, if not thousands, of AI-written articles over the last two years, and I have never once found one that did not include some sort of “examples” header, even in stories where an “examples” header made little to no sense at all. Another note: I don’t know of any style guide that has writers format their lists like this, yet I’d say I’ve seen this exact list-header formatting in over half the AI articles:
1. Header Here - Rest of list item.
With this in mind, I have to think this must be some weird formatting convention AI created for itself by combining many different style guides. To be fair, I do sometimes see the “Header Here” part bolded, but I digress.
I also want to point out the 1’s and 2’s I scribbled out here, as these show up again in the Conclusion section below, too. AI seems to like short, simple, parallel paragraphs, and two seems to be the magic number of sentences per paragraph (and apparently per list item, too). So if, even before you start reading, you just look at the article and see a bunch of short, evenly sized, parallel-looking paragraphs, proceed with caution. People usually don’t write that neatly. They write a paragraph until a thought is complete, not until the paragraph looks like it’s the same length as the previous one. So this can be another red flag for readers and algorithms who are in the know.
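For the skeptics, here’s a back-of-the-envelope Python sketch of that “suspiciously even paragraphs” red flag: count sentences per paragraph and see how little they vary. The sample article text is made up; real use would feed in the scraped article body.

```python
# Crude structural check: flag articles made of short, uniformly sized paragraphs.
# The sample text below is invented for illustration.
import re
from statistics import mean, pstdev

article = """AI offers many benefits. It can personalize content.

It also helps with curation. This improves engagement.

Analytics are important too. They optimize ROI."""

paragraphs = [p for p in article.split("\n\n") if p.strip()]
sentence_counts = [len(re.findall(r"[.!?]+", p)) for p in paragraphs]

print("Sentences per paragraph:", sentence_counts)
if len(paragraphs) >= 3 and pstdev(sentence_counts) < 0.5 and mean(sentence_counts) <= 2:
    print("Red flag: short, uniformly sized paragraphs -- reads like a bot wrote it.")
```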
Because I’m no AI bot, I’m not going to repeat all the stuff I already mentioned. But I did elect to skip to this conclusion section to point out a few more things. Let’s start with the first words of each paragraph: “in conclusion,” “one potential direction,” “another promising area,” and just plain ol’ “however.” I had a teacher who called these “crutch” words or phrases – pretty juvenile, forced transitions that writers lean on to move a story or essay along when they really don’t know what else to do. In other words (another crutch phrase, to illustrate), phrases you really shouldn’t see too often in a professional article, let alone opening four consecutive paragraphs.
And let’s take a closer look at what’s being said here. Yeah, lots of repetition of the two-sentence pattern again, yada-yada. But I’ve noticed there’s an even more distinctive thing AI does in conclusion sections. Go ahead: Take a closer look and see if you can, well, see it. I’ll wait (I’ll put the answer a few lines down to avoid spoiling it for anyone who actually wants to guess).
And the answer is…
Keyword stuffing. Didn’t see it? Look at that first paragraph: content personalization, content distribution, content curation, analytics, ROI optimization, content marketing strategies. And the second one: AI in content syndication, natural language processing, NLP, chatbots and virtual assistants, AI-powered tools. And the third: AI in content syndication, privacy concerns, data bias, human oversight.
At first, a lot of these may not seem like obvious keywords or key phrases, but if you take a look at SEMRush, Moz, SE Ranking, or other such tools, these keywords come up a lot in relation to the umbrella, or “pillar,” terms “content syndication” and “SEO.” So it’s almost like the bots get anxious and just tee up a bunch of keywords and load ‘em up as a last-ditch effort to get the story to rank higher.
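To make the keyword pile-up concrete, here’s a crude Python sketch that counts how often pillar-related phrases show up in a conclusion section. The keyword list and sample text are stand-ins I wrote for illustration, not output from SEMRush or any other tool.

```python
# Crude keyword-stuffing check: count pillar-related phrases in a conclusion.
# The keyword list and sample text below are invented for illustration.

conclusion = """In conclusion, AI in content syndication enables content personalization,
content curation, and ROI optimization. AI in content syndication, combined with
natural language processing and AI-powered tools, transforms content marketing strategies."""

pillar_keywords = [
    "content syndication", "content personalization", "content curation",
    "roi optimization", "natural language processing", "ai-powered tools",
    "content marketing strategies",
]

text = conclusion.lower()
hits = {kw: text.count(kw) for kw in pillar_keywords if kw in text}
total = sum(hits.values())
word_count = len(text.split())

print(hits)
print(f"{total} keyword hits in {word_count} words -- smells like stuffing.")
```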
So, Did It Work?
I was beyond curious to see how this story was doing in SERP rankings. Because if this site is claiming that AI content can boost SERP rankings, we should put it to the test, right?
I put the article URL into SEMRush’s Organic Research tab, searching by “Exact Link” for each month since the article went live in November 2023. I used this method since I didn’t know what keywords this story had ranked for at any point during those seven months. From this, I found that it had only ever ranked for two terms:
- content syndication vendors
- syndicated data analysis
I hid any details that could be traced back to the site, but here’s how it was performing as of May 30, 2024:
So, both terms are in the top 100 (barely), but it’s also worth noting that neither term sees much competition or traffic.
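For anyone who wants to repeat this at home, here’s a hypothetical Python sketch of the month-by-month tally, assuming you’ve exported one keyword report per month as a CSV from whatever rank tracker you use. The file paths and column names (“Keyword”, “Position”) are invented; adjust them to match your tool’s exports.

```python
# Hypothetical month-over-month summary of a URL's keyword rankings, built from
# monthly CSV exports. File names and column names are made up for illustration.
import csv
from pathlib import Path

months = ["2023-11", "2023-12", "2024-01", "2024-02", "2024-03", "2024-04", "2024-05"]

for month in months:
    path = Path(f"exports/{month}-organic-keywords.csv")
    if not path.exists():
        print(f"{month}: no export found")
        continue
    with path.open(newline="") as f:
        rows = list(csv.DictReader(f))
    if not rows:
        print(f"{month}: didn't rank for any keywords")
    else:
        ranked = ", ".join(f"{r['Keyword']} (#{r['Position']})" for r in rows)
        print(f"{month}: {ranked}")
```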
I checked the rankings for this story link every month from November to May. This is what I found:
November: Didn’t rank for any keywords
December:
January: Didn’t rank for any keywords
February: Didn’t rank for any keywords
March: Didn’t rank for any keywords
April: Didn’t rank for any keywords
So, there you have it. Whether or not that’s enough data to draw any conclusions, I’ll leave up to you. But if you ask me, it’s pretty revealing.
Giving AI the Final(ish) Word
To be fair, I thought I’d give ChatGPT a redemption round and give it a shot at writing my conclusion. Here’s what it wrote:
In conclusion, the allure of AI as the all-powerful tool for content creation and syndication may be more myth than reality. Despite the promises of skyrocketing search engine results pages (SERPs) and effortlessly generated content, the truth is far from the fantasy.
Through an exploration of the pitfalls of AI-generated content, it becomes evident that while AI can produce text, it often lacks the nuance, coherence, and depth of human-created content. From generic and uninspired headers to repetitive and vapid language, AI-generated content frequently falls short of providing meaningful engagement for readers or genuine value for SEO.
Moreover, the attempt to manipulate algorithms through keyword stuffing only serves to diminish the credibility of the content further. The reliance on formulaic structures and forced transitions exposes the artificiality of AI-generated articles, making them easily recognizable to both discerning readers and sophisticated search engine algorithms.
Despite claims of enhanced SERP performance, the reality of AI-generated content's impact on rankings remains underwhelming. Analysis of actual performance data reveals minimal traction in competitive keyword rankings, casting doubt on the effectiveness of AI in content syndication.
Ultimately, the quest for shortcuts in content creation through AI may lead to diminished quality, credibility, and impact. While AI technology continues to evolve, its current limitations underscore the irreplaceable role of human creativity, expertise, and authenticity in crafting compelling and effective content. So, before placing blind faith in the promises of AI, it's essential to approach with skepticism and a critical eye, recognizing that true excellence in content creation requires a human touch.
(Want extra credit? Go ahead and correct this one yourself.)
Will The Real Human-Written Conclusion Please (Metaphorically) Stand Up?
Girls, guys, and nonbinary pals of the courts of X, Instagram, Facebook, and the comment section, I brought forth to your most honorablest attentions the mass injustices brought upon our great, nay, stellar, to harken back to the introduction, content syndication plans we’ve worked so hard to create. And my henceforth post arguments and extranormal evidences therein shall prove best goodest enough to convince thee to proceed with AI content creation with caution, I do declare.
I’d like to see a robot write that.