The weaponization of AI-generated content has reached a whole new level.

A new trend is emerging in the shared “5th dimensional warfare” space we call society these days. Organizations, interest groups, and even some individuals have begun to weaponize the label “AI-generated” as a way to dismiss or ignore content they disagree with or find challenging.

They aren’t using such content themselves. No, this is a new weaponization. A new tool for controlling information spaces. Now, they simply use the accusation of AI-generated content to block, remove, or cast doubt on any content that doesn’t fit their desired narrative for the space they control.

The feelings of apathy, hatred, or even fear people have regarding the use of AI systems for generating articles or videos have created this new opening for bad actors on the battleground of information warfare. And they are starting to use it with relish. Get ready to have your own creations used to discredit you…

Accusations are just as good as proof

Social media has long been the place where people go to find refuge from the narratives of various political and cultural sides in this struggle for truth and honesty. And while the bad actors were still there, spreading lies and misinformation to support their viewpoints, they could always be countered by opposing views, through comments, counter-posts, and whatnot. If someone presented evidence of “A,” then very shortly others would chime in with their evidence for “B,” “C,” or even “D.” That allowed the reader to see all sides right in the same place, and critical thinking skills combined with a little independent research could do the rest in determining fact versus fiction.

But, what if there was a way to instantly discredit someone’s posts or comments? What if you could “throw shade” on any information with just one simple statement that no one could prove wrong?

That is what the statement “It’s AI generated” has done.

Now, you don’t even have to have any measure of influence or control on a particular platform to block information. You can just use that little phrase to instantly take the wind out of whichever sails you choose, and no one can really defend against it. And, if you do have some influence, like moderation or admin powers, well…

You can’t *prove* beyond all doubt that a particular bit of text is AI generated, but at the same time, no one else can *prove* that it isn’t. When it comes to weapons as part of information warfare, this one is a doozy.

It’s a special form of catch-22 that works almost every time for the first person to play that card at the table. Once the AI genie is let out of the bottle and into the conversation, it can’t be put back in, and it instantly brings fear, uncertainty, doubt, and hate into the equation. The entire thing is tainted now.

Losing an argument? Someone posted an evidentiary article to disprove what you are saying? Dang… but hey! Don’t despair! All you have to do is accuse them of creating some “AI generated” garbage to mislead everyone! If you want, you can even use your own AI tools to create an “evidentiary” article about why that other article is bullshit! And now, while you may not have won this encounter on the forums of social media, you haven’t lost it either, and you did deny your opponent the win.

Another bastion falls…

Still, winning arguments, well, that’s amateur hour stuff there. The real power of this new weapon resides in the hands of those who have just a little bit of control on one of these platforms. Any sort of “moderator” or “admin” or any other authority can use this to much greater effect. I have recently seen it begin to roll out on one of my own “safe haven” informational platforms, and it is a bit upsetting.

It came from the place where I was once most active: Reddit. Specifically, the r/Collapse subreddit, where people gather to discuss information and news about climate change, conflict, environmental concerns, dystopian late-stage capitalism, and all sorts of other topics related to the potential collapse of global civilization.

AI is one of the societal collapse threats we discussed there in great detail, which makes this new development all the more insidious.

A recent rule change has allowed the moderator team there to remove any comment or post that they deem to be AI generated. No proof required. No means of determination given to assure users that the rule wouldn’t be used to take down real content. Their officially declared method of determining what is and is not AI generated content?

“Trust me, bro.”

That’s it. That is how they will decide what is real and what isn’t. 

And that right there is a tool straight out of a narrative crafter’s wet dream.

Now, whatever they want to take down, they can just say it seems like AI might have been used. They have even banned users for utilizing the spell-check features of their phones when writing comments. I know, I tested that myself. And got banned for using “AI generated content.” The exact words I let my phone correct for me were “embedded” and “inseparable.” I commented as you can see in the picture below, with the result being… I was banned.

At any rate, I was actually trying to provoke that reaction to see what the official position of the moderator team was regarding the use of AI features like spelling and grammar correction. I also wanted to see if they would abuse the brand-new system to get rid of me and the challenges I was raising. I had tried asking directly about these things, but the answers I got always sidestepped the question. Sure, I was clearly told that things like spelling and grammar correction would never be cause for removal… but how to know for sure? My little test showed the reality of the situation.

And that reality is that they will be using this new rule to simply remove any information they do not agree with, or to get rid of people who are pushing the bounds of whatever narrative is being set up. They used the rule to remove my comment and ban my account temporarily, and since they explicitly stated that spelling checks were not grounds for removal, that means they simply abused the new rule to shut me up for a while.

Other users will also be able to participate, flagging and reporting anything they believe contains AI-written text. The moderators can then pick and choose what to remove from that pool, and thus shape the narrative of the subreddit much faster and more effectively. It will soon be nothing more than another extremely biased echo chamber reflecting only the “approved” truth. Other interest groups will have free license to brigade over to r/Collapse and mass-report whatever they don’t agree with.

And that is sad. I once felt like the r/Collapse online community within Reddit was one of the last holdouts of objective discussions and critical thinkers. That bastion has fallen now…

The collapse of civilization may be AI-generated

There is a concerted effort behind this, and of course it is much bigger than some little sub on Reddit. The weaponization of the “AI generated” label is just the latest tool in a much bigger, and ongoing, effort to control the flow of information among the populace.

This new practice of labeling things as AI-generated in order to make people immediately discount them stems from various factors. Resistance to change and innovation is one of them. The rapid advancement of AI can be unsettling for some people, and there are still very big unanswered questions about what it can and cannot do, as well as how it will be used. People naturally fear and hate what they do not understand, and that leads them to reject content they perceive as AI-produced, regardless of its quality or the validity of the accusation. You can make someone hate or disregard something pretty quickly just by saying, “Oh, that’s just AI slop…” No evidence required.

Potential job displacement and the devaluation of human creativity are other fears people have about AI. The increasing capabilities of AI raise concerns about its impact on human roles and creative endeavors. It is a legitimate worry, given just how good AI is getting at creating things. But the better it gets, the more this fear leads to a desire to downplay or disregard AI-generated content.

There is also quite a bit of misunderstanding regarding AI’s capabilities and limitations among regular people. Most individuals have a limited understanding of how AI functions, making them susceptible to misconceptions and biases about its outputs. That is a factor that those wishing to control information can use to manipulate people’s own thought processes and help guide them to incorrect conclusions. 

Another way this new tool of control is being used is to simply make people dismissive of things that maybe they should have been looking at a bit more closely. For those who don’t want them looking too closely, dismissing content as “AI-generated” is a quick and easy way to keep people from engaging in thoughtful analysis or considering alternative perspectives. Neutralizing their critical thinking skills before they even bring them to bear, wow. Talk about an effective weapon for information warfare.

And speaking of weaponizing things, even AI detection tools are used to support this tactic. While AI detection tools were developed to identify AI-generated content, they are also being used inappropriately to discredit or dismiss genuine human work. AI is now so prevalent, and so capable of mimicking real human writing, that no tool can return a truly definitive verdict either way, unless the AI content is so shoddily produced that it is immediately apparent, or the human content was written by a five-year-old. And in those cases, no one needs a tool to tell them where it came from. So, that uncertainty in the AI detection tools themselves is being used against real human-generated content. “See?! Even the AI detector can’t tell for sure!”

AI attribution by these tools can also result in a “false positive” without even needing any input by a manipulator. Research shows that AI detection tools can sometimes falsely flag human-written text as AI-generated. There are plenty of articles and studies about it. This can lead to unfair accusations, especially in academic or professional contexts, where AI attribution can have serious repercussions.
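To make the false-positive problem concrete, here is a toy sketch. Everything in it is hypothetical: real detectors are far more sophisticated, but many lean on statistical proxies like “burstiness” (the variance of sentence lengths), and perfectly human, formulaic prose scores low on exactly that kind of metric:

```python
# Toy illustration of how a statistics-based "AI detector" can falsely
# flag human writing. The heuristic here (low sentence-length variance
# means "AI") is a deliberately simplified stand-in, not any real
# product's algorithm.

import statistics


def burstiness(text: str) -> float:
    """Population standard deviation of sentence lengths, in words."""
    normalized = text.replace("?", ".").replace("!", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)


def naive_is_ai(text: str, threshold: float = 2.0) -> bool:
    """Flag text as 'AI-generated' when sentence lengths are too uniform."""
    return burstiness(text) < threshold


# Plain, formulaic human prose: every sentence is five words long,
# so its burstiness is 0 and the detector flags it as AI.
formulaic_human = (
    "The meeting starts at nine. The agenda covers three items. "
    "Each item gets ten minutes. Questions come at the end."
)

# Uneven, conversational prose sails through, regardless of who
# (or what) actually wrote it.
varied = (
    "Wait. That can't be right, can it? I checked the logs twice before "
    "realizing the timestamps were in a completely different timezone all along."
)

print(naive_is_ai(formulaic_human))  # a human false positive
print(naive_is_ai(varied))
```

The point of the sketch is only this: when the signal is a statistical tendency rather than a watermark, there is always some honest human writing sitting on the wrong side of the threshold.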

This new practice of labeling content as “AI-generated” to dismiss it, or to counter real fact-based arguments, will have many negative consequences. It will do a lot towards undermining productive discussions on platforms everywhere. Such use of this newly weaponized label can shut down important conversations and prevent the consideration of valid arguments or ideas, regardless of their origin. Once that bomb is thrown into a conversation, what you are left with is, at best, a chaotic jumble of accusations and arguments that destroy any progress for the original discussion.

Another use for this tool is the erosion of trust and the creation of an atmosphere of suspicion. Falsely accusing content of being AI-generated can damage the credibility of the content creators on a platform, and foster distrust within the entire community of users. This leaves the controlling moderators or other entities with the sole power to determine what is truth and what isn’t, or at the very least to ensure that no productive discussions can take place.

It can also be used to “make an example” of someone whose work may be a thorn in the side of those seeking to control the narrative of a space. Leveling a few key accusations and “catching” someone in repeated attempts to “mislead” with AI-generated content can completely wipe out any credibility they have. This will go a long way towards discouraging innovation and exploration of new ideas. The fear of having one’s work dismissed as “AI-generated” will stifle creativity and discourage individuals from experimenting with new tools and approaches, which is exactly what those controlling and censoring a space would want. 

It’s a feature, not a bug

What makes the entire thing worse, and why this tool is so effective, is that there are many valid concerns about AI-generated content.

There is a lot of “built in” potential for misinformation already. AI models, especially those for generating text or media, are capable of producing inaccurate information or “hallucinations” (confident-sounding but false statements), according to experts in the field. This can lead to the spread of misinformation or even disinformation, which can be referenced later to create other intentionally misleading content.

There are copyright and plagiarism concerns too. AI models are often trained on vast datasets that usually include copyrighted material. When an AI generates content that closely resembles existing works, it can raise concerns about copyright infringement and plagiarism. Just another thing to help set that fear, uncertainty, and doubt in the minds of everyone.

When you look at the social media use of “algorithms,” you discover the risk of the reinforcement of cognitive, social, and cultural biases. AI algorithms can reflect and amplify biases present in their training data, and also within the content they are working around, leading to the perpetuation of stereotypes or discriminatory outcomes, and also misinformation. This is particularly concerning in controversial areas of discussion such as politics, law enforcement, and things like climate science, where biased decision-making can have serious consequences.

There is also the erosion of accountability to consider. The proliferation of AI-generated content, especially deepfakes and realistic but synthetic video media, is an issue. This can erode trust in traditional media and institutions, and raise questions about accountability. AI systems cannot be held legally responsible for their outputs, leaving businesses and individuals who publish such content liable for any harm caused. Thus, having one’s content falsely labeled as AI-generated can seriously harm both people and businesses.

Conclusion

I could go on, but the simple fact is that there is a lot to fear and hate about AI-generated content. It has been a destabilizing thing ever since it first burst onto the scene of the mainstream world, and that just increases its effectiveness as a weapon for control.

Whether AI is used directly to spread false information, or others are baselessly accused of doing so, this may very well be the “5th dimensional warfare” equivalent of a nuclear weapon. In the wrong hands, or even in the right ones, the danger is real, and increasing.

It’s important to approach all content with a critical mindset, evaluating its merits based on the veracity of the information it conveys rather than making snap judgments based on assumptions about its origin. 

Most social media has fallen already, or is in the process of it. The mainstream died long ago, of course. It is sad to see my old holdout of r/Collapse head down this path, but I knew it was coming. No digitally accessible space will be proof against those who will use this weapon to control their narratives and spread their view of what is true and what is not.

This will spread. Use of this new weaponized tool for information warfare will increase. You, dear reader, must be vigilant. You will have to defend yourself because you can’t rely on anyone else to defend you. 

So, when someone tries to tell you that something is false, or AI-generated, and their only evidence is “Trust me, bro,” you need to take that with a big grain of salt. And that includes those who you have set above you to be your guides and protectors.

The truth is that you don’t need anyone to hold your hand or steer your mind when it comes to information. You have a critically thinking and analytical mind that can do all of that for you. As soon as you abdicate that authority and transfer it to someone else, you lose the power to know what is real and what is not. So don’t relinquish it. Don’t allow your spaces to be censored. Don’t let moderators or admins or whatever convince you that they know better than you what is true or not. Don’t let them “clean out the garbage,” so to speak, when it comes to your content. You do that. See the whole picture, the good, the bad, and the ugly, and determine for yourself what is real.

Or do not. And be led in whichever way they choose to lead you. The choice, for now, is yours.


