Lessons From an AI Bender Pt.II: Call Me Trunks…


Recently, WordPress unveiled two new artificial intelligence (AI) tools on its platform, both intended to help authors improve their writing. When I saw them, my curiosity got the better of me and I went on another bender. I went through all thirty-seven pages of my blog and picked at least one article (though it was frequently more) from each page to test out these new tools. I found some interesting boundaries along the way, not to mention a blue Capsule Corp jacket that seems to fit me juuuust right….

One lesson I learned was that the title-generating tool was far more forgiving than the image generator. The title generator was even willing to provide alternative title suggestions for articles that the original feedback bot refused to read. With that said, I wasn't exactly blown away by any of the recommendations it came up with. It seemed that this AI bot was designed for specificity rather than SEO virality, which is a major faux pas in the modern age of writing. Besides, I'm a bit attached to my titles; most are either indicative of a series (In Critique of, An Ode to, et cetera) or just plain catchy. Some of my homies back on Blue Reddit complimented me on my titles, and even those who hate-read my articles (Marge Simpson Appropriates Black Culture) still get sucked in. Thus, I'm clearly doing something right with titles.

The main focus of this piece will be the other new generative AI tool that WordPress rolled out: the image generator. Just like with the title generator, I wasn't impressed with the vast majority of the images it created for me; two were utterly fantastic, two were passable, and the rest were just hot garbage. Case in point for that final category: the written language present in some of these images was a garbled mashup of alien-looking nonsense that nobody could actually read.

However, the lack of innovation and the hit-and-miss quality weren't what inspired me to write this article. No, what goaded me into writing this was how many times I broke the image generator bot. Using a representative sample of my writing (about fifty of my most interesting and spiciest articles), I found that the AI bot refused to create images for nineteen of them. That's right: nearly forty percent of the time, the bot found my content too extreme or too unsavory, for some unknown reason, to generate an image for. Put simply: I break more bots than Future Trunks.

By far the best piece the bot came up with! Though I still prefer my original drawing for my article on STEM and Metal Gear Solid V

Some of the refusals were understandable. The original feedback bot refused to read My Michelin Rite of Passage, so it makes sense that its image-generating little brother refused to touch it as well. However, this wasn't a universal rule either, because the feedback bot refused to read my article Legalize Anti-Theft Car Bombs, yet the image bot still generated an image (mediocre as it may have been) for it. Curious, I asked the bot to generate an image for Legalize Home Defense Landmines, though it refused to do so, despite the feedback bot still giving me heavy-handed critiques. Continuing with the theme of violence, I tested the bot's willingness on Marketing Ideas for Dojos to no avail. Thus, it's clear that the bots are not completely aligned, and overtly defensive violence isn't a hard rule for the bot, though it's definitely a guideline. This got me thinking: where are the invisible lines? So, I dug further.

While not a hard rule per se, I noticed that every time my article mentioned a famous person, the image bot would not generate an image in that person's likeness. This is understandable to a certain extent; after all, why wouldn't WordPress install some guardrails rather than risk a defamation suit? However, the people in the images generated for Election 2044: President Logan Paul looked absolutely nothing like either Donald Trump or Logan Paul. Ditto for the image generated for An Ode to Hideo Kojima; it looked nothing like him.

There were some articles that I was genuinely surprised the bot actually agreed to create images for. One such example was Legalize Alcohol at Youth Sports Games, a piece of mine that was widely condemned by those close to me on Blue Reddit and in real life. The image generated actually did feature a beverage stand, similar to the ones at amusement parks, with label-less glass bottles on display on the top rack. It also depicted some alien mashup of football, baseball, and soccer being played, so I can't give the bot my full kudos either. Another one that caught me by surprise was Coca-Cola In A Post-Legalization World; the bot was all too happy to generate images of Coca-Cola bottles. Clearly, substance abuse is not a hard line for the bot either. It seems like Android 15 survives this time…

I continued my quest to find the bot's hard-though-unmarked lines. Surprisingly, the image bot actually did create some art for Let The Kids Margin Trade (the bigger surprise was that it was actually good!). In that article, I argue that we as a society should change the laws to allow something that is currently illegal. Sensing that currently illegal activity was not a hard rule, I then tried Opt-Out Government, and it generated some images for that as well, forgettable as they may have been.

So, if substance abuse was a-okay, non-violent crimes were okay, and defensive violence was semi-permissible, where were the hard lines that kept breaking the bot? It turns out that the AI has far more cultural, social, and political biases than legal ones. One thing I noticed was that on the topic of sex, the bot was hit-and-miss. It refused to generate an image for Sex Work is Real Work. Here's Why…, despite the article featuring only non-violent crime and my advocating that the current laws be changed peacefully. I similarly broke the bot on Fermi Problems: How Much Would Taylor Swift Make On OnlyFans? However, the bot was totally okay with generating an image (and a fantastic one at that; the only one I actually used!) for Brand Licensing: The NFL and OnlyFans. Thus, I found that the bot leans towards No regarding female nudity and sexuality. If I were a writer for Buzzfeed, I'd accuse the bot of being anti-female agency and opposing female autonomy for rejecting the OnlyFans pieces, though I'll be charitable and assume it simply has a bias towards G-rated topics. But seriously, fuck Android 18…

Perhaps the juiciest topic of them all is the bot's political biases, and holy fuck does it have political biases! Many of the articles it refused to illustrate were ones that featured my signature Libertarian bent. Despite the first Opt-Out Government being okay, it refused to generate images for the sequel. It also refused to generate images for An Ode to Isolationism and An Ode to Peter Thiel, the latter of which is highly concerning for porcupines. My Peter Thiel article was almost exclusively about his career as an investor, so the bot's bias had to have been about Thiel's reputation in the media, given that it generated images for my other finance-themed articles. Were I a writer of lower morals, I'd accuse the bot of being homophobic and xenophobic for not giving Peter Thiel a fair shake, though I'll take the high road and simply chalk it up to political bias instead.

A bit too literal, but not bad! Farming Black Porcupines still has original artwork though

While difficult to fully define, topics deemed distasteful by polite society were also completely avoided by the image bot. The bot refused to generate images for The Many Missed Opportunities of Alex Jones and In Critique of “Hell in Every Religion”, showing that religion and Alex Jones are hard lines for the bot. The bot also refused to respond to a prompt for Santa’s Broken Feedback Loop, in which I discuss extremist foreign governments and a beloved semi-religious icon. Thus, open extremism and religion are both left-swipes for the bot. While neither religious nor politically extreme, Identifying as Disabled: A Thought Experiment was rejected by the bot as well. I wasn’t completely surprised, since many of our LGBT and handicapped friends, and wider society as a whole, would find it distasteful despite not being able to refute my take-home message. This reminds me of the time Krillin and I blew up larval-stage Cell!

Then there are the articles where I simply have no idea why they were blacklisted by the bot. Two such examples are In Critique of Art Class and In Critique of Foreign Language Class, in which I merely criticize teaching methods and offer alternatives. This is equally head-scratching because my other classroom articles, such as In Critique of Gym Class and In Critique of Math Class, were written in the same tone yet were deemed totally fine by the algorithm. Also, my “Get Your Shit Together” article was banned, despite the lack of any extremism, violence, cultural faux pas, or openly political stances. I also have no idea why Ancestry is for Losers was banned.

Stepping back for a second, I found a valuable question to ask: why be so cloak-and-dagger about what is and isn’t allowed? Seriously, why not just post a list of what the bot will not touch, rather than relying on users to find the boundaries the hard way or guess what triggered the trip-wire after the fact? I realize that I’ll eventually be met with some variation of because it’s the industry standard in Silicon Valley, but let me proactively say that answer is not good enough. I’m not accepting the argument of ‘Well, Facebook, Reddit, and YouTube are censorship shitholes that love to secretly throw the taboo label onto everything, therefore it’s okay for us to do it too!’. I’m not saying that platforms need to accept everything; they don’t (another bot-banned article!), but some transparency and consistency around the taboos is all I ask.

One thing is for certain though; I break more bots than Future Trunks…

