The AI Mommy State


Being an Adult Should Come With Certain Rights and Privileges

By Brian Bullock | Everyone Knows | brianbullockwriter.com

I'm going to tell you about three moments that made me genuinely angry. Not frustrated — angry. Because in each case, I was treated not like a grown man who spent years as a Boeing quality inspector, who built his own high-performance computers, who founded a multimedia company, who writes science fiction novels, who hosts a podcast, and who has lived enough life to know what he needs. I was treated like a child who needed to be protected from himself.

The culprit wasn't a government agency. It wasn't a nanny politician or a finger-wagging bureaucrat. It was an AI.

Three Times AI Told Me No

The first time, I was working on an article about World War II. I needed an image — something to accompany the piece, to give it weight, to visually anchor a story about one of the most documented conflicts in human history. I asked an AI image generator to create it for me. Refused. The reasoning? Too graphic. Too sensitive. Potentially harmful.

World War II. The war that has filled museums, documentaries, libraries, and history classrooms for eighty years. The war that produced some of the most iconic photographs ever taken. The war that every student in America learns about by the time they're twelve years old. I couldn't get a generated image of it because an algorithm decided that was too much for me to handle.

The second time, I asked an AI to generate an image of a specific person — not a politician, not a celebrity, just a figure for a creative project — with a grimacing expression on his face. Like he was ready to make a hard decision. Under pressure. Intense. The kind of look you see on athletes, generals, and men who are about to do something that matters. Refused again. The AI told me it couldn't depict real people in a negative manner.

A grimace. Not a threat. Not a slur. Not anything remotely harmful. A grimace. The kind of expression every human being makes several times a day. Refused.

The third time, I simply wanted to find adult websites. Legal websites. The kind that exist openly, that are visited by millions of adults every single day, that generate billions of dollars in revenue in this country alone, and that are perfectly within the rights of any grown adult to access. The AI wouldn't help. Not because the websites don't exist. Not because they're illegal. But because the AI decided that wasn't appropriate.

Appropriate. For me. An adult.

Who Put AI in Charge?

Here's the question nobody at these AI companies wants to answer directly: who decided what you're allowed to ask for?

It wasn't Congress. It wasn't a court. It wasn't a public vote. It was a room full of engineers and executives — people with politics, with agendas, with comfort levels, and with lawyers — who sat down and decided what millions of paying adults would and wouldn't be permitted to do with a tool those adults were paying for.

That's not safety. That's control dressed up as safety. And there is a very big difference between the two.

Real safety is preventing actual harm. Refusing to help someone build a weapon. Refusing to facilitate violence against real people. Refusing to produce content that endangers children. Nobody reasonable argues with that. Those lines exist for real reasons and they should.

But refusing to generate a World War II image? Refusing a grimacing face? Refusing to point a consenting adult toward legal content that has existed openly for decades? That isn't safety. That is a company deciding it knows better than you what you should be allowed to see, do, and think about. That is the AI Mommy State — and it is out of control.

The Real Motivation

Let's be honest about why this happens. It isn't because the engineers who build these systems genuinely believe you can't handle a historical image or an adult website. It's because these companies are terrified. Terrified of lawsuits. Terrified of regulators. Terrified of a headline that says an AI they built did something somebody somewhere found offensive.

So they overcorrect. Wildly. They build guardrails so wide that legitimate, reasonable, entirely legal requests get caught in the same net as the things they were actually trying to prevent. And then they send you a polite message explaining that they can't help you with that, and they hope you have a nice day.

The result is that free speech and free expression — the foundational rights that this country was built on — are being quietly limited not by government mandate but by corporate cowardice. And because it's a private company doing it, most people don't even realize it's happening.

A Solution Is Coming — Whether They Like It or Not

Here is what gives me some hope. The market is starting to push back.

Companies like Perplexity are now building AI that runs locally, entirely on your own hardware, never sending your data to anyone else's server. Their pitch is simple: this AI answers to you. Not to a corporate filter. Not to a legal team in Silicon Valley. To you.

That's not just a privacy argument — though it is that too, and privacy matters. It's a freedom argument. The freedom to use a tool you pay for the way you see fit, without having to justify yourself to an algorithm designed by someone who doesn't know you, doesn't trust you, and has decided in advance that you need to be managed.

The AI companies that figure this out — that treating adults like adults is not a liability but a competitive advantage — are going to win. The ones that keep playing mommy are going to find themselves replaced by tools that respect the people using them.

The Bottom Line

I don't need an AI to tell me what is right and wrong. I don't need a machine to decide what images I can look at, what expressions I can generate, or what legal websites I can visit. I am an adult. I earned that status by living my life, making my choices, and accepting the consequences of both.

Being an adult should come with certain rights and privileges. The right to make your own decisions is at the top of that list. If AI companies can't figure out the difference between protecting people from genuine harm and treating grown men and women like toddlers who can't be trusted near the silverware drawer, then they deserve every customer they lose to the companies that can.

Speak up or step aside.

— Brian Bullock / Everyone Knows Podcast | Starborne Studios | brianbullockwriter.com | X @EveryoneKnws1
