An open source AI based on Meta’s LLaMA model is being used to create sexbots.

Its creator is thrilled to get to experiment with state-of-the-art tech, according to this story. He feels that commercial chatbots are “heavily censored.”

And that’s the argument in favor of open source development. Entrepreneurs and artists need freedom to experiment. The ugliness of a chatbot that engages in graphic rape fantasies is a small price to pay for all of the wonderful and beautiful things that might emerge from fooling around freely with AI.

After all, there’d have been no Internet goodness without porn badness, especially in the early days. I’d wager that most innovators of any sort have never been particularly comfortable working within the constraints of regulation or propriety.

But calling open source AI “code” or “a model,” and giving it a cute name or acronym, doesn’t do it justice.

Open source is AI plutonium. We’re being told that we must tolerate the possibility of deadly weapons in order to enjoy power generation.

It’s not true. Sure, the strides made with open source tools like LLMs have given developers the easiest path to the quickest results. Online customer service will never be the same. A generation of kids can cheat better on their homework assignments. AI in government and business is combing through data to find more patterns and make better predictions.

But we can be sure that development is underway on applications that are illegal, possibly deadly, and which certainly promise/threaten to change the ways we work and live. And there’s no way to find those bad actors among the good ones until their badness appears in public.

It could even harm us directly, and we wouldn’t know that AI was responsible.

So, we might never figure out that an errant AI has been quietly manipulating stock prices or skewing new drug trials. It could sway elections, entertainment reviews, and any other crowdsourced outcome. Bad actors, or an AI acting badly, could encourage social media ills among teens and start fights between adults.

It might even start a war, or decide that nuclear weapons were the best way to end one.

Unlike with plutonium, there’s no good or reliable way to track or control such outcomes, no matter how transparent the inputs might have been.

In true Orwellian fashion, the CEO of a site that promotes open source argues that the real risk comes from “secretive” businesses like Google and OpenAI, which take at least some responsibility for their AI models. A VC exec who promotes AI worries that relegating development to big companies means “they’re only going to be targeting the biggest use-cases.”

It’s a false dilemma. I’d happily “censor” a porn application in exchange for a cure for cancer, especially if it came with the likelihood that the world wouldn’t get blown up along the way.

[This essay originally appeared at Spiritual Telegraph]