What people mean when they say “I hate AI”
It’s getting pretty common for people to say that they are opposed to AI, and it’s not hard to understand why: AI is crawling into every service and organization. But the thing I’m trying to figure out is what exactly it is that people are against.
AI is an old field. It has been in use for decades. Most people who say “I hate AI” don’t mean traditional AI like chess engines or fuzzy-logic rice cookers. But what do they mean? This essay is my attempt to drill down into this, but I don’t expect there to be a single answer.
“I hate generative AI”
This is, I think, the most common rephrasing of the claim: AI is fine right up until it “generates things”. This feels sensible because the things people tend not to like (e.g. chatbots, AI images, TTS, deepfakes) all make “things”.
But is “making something with a computer” the thing that people object to? People don’t dislike procedurally generated art. I don’t recall people being upset by traditional text-to-speech.
I’ll go further and say that “generative AI” is not really well defined. Is a speech-to-text system “generative”? It generates text, and it can use a transformer neural-net architecture. Yet I don’t think people are generally against AI-powered voice transcription.
“I hate large language models”
I think this is the next most common thing I hear, probably because LLMs are where everyone is throwing their money right now. It’s interesting that the objection is centered on a particular approach and not a capability. Are you OK with small language models? Are you OK with chatbots powered by other AI approaches? And this explanation doesn’t account for the fairly common opposition to image generation models.
I tend to hear people say they would be generally supportive of LLM uses that are not chatbots. What if an LLM could read through all these historical documents and show me the few most relevant to my research question? Not all LLM use cases will even show the user AI-generated text.
While this is commonly said, I’m not sure it’s really what people mean.
“I hate AI that is bad for the environment”
I’m not well informed on the environmental costs here, so I don’t have much to say, but this is clearly a big talking point today. One thing I will note is that people often conflate a number of related but very different costs:
- The cost of doing the fundamental science
- The cost of training the production model
- The cost of serving the model
There are definitely people who oppose AI on these grounds, but I’m not sure that’s the primary source of their opposition (i.e. they would oppose it anyway for other reasons). Are these people going to be happy to see Gemma 3n running on their phone? I don’t think so.
“I hate AI that stole IP”
This concern gets a lot of coverage. The argument goes that the training of these new AI models is inherently unethical, and so we cannot use them. “If only we could train the models on just Wikipedia or just Creative Commons images!” one could imagine them saying.
AIs have been trained on non-free corpora for a long time (it was assumed to be fair use). Machine translation systems (e.g. Google Translate) were trained on non-free data and nobody fought against them. Is that because Google Translate doesn’t compete against the people whose data it used? That explanation seems shaky, since Google Translate (or things like it) does threaten to put translators out of work.
Another curious element is that the people who tend to use this argument are people I would otherwise assume to be anti-IP in general. It used to be fairly common to be anti-copyright or anti-patent. Is it just a case of different standards for the “big guys” vs. “individual creators”?
People complain when OpenAI generates an image in the style of Studio Ghibli, but have no problem if a human does the same. Both the human and the AI “looked” at the source art and used it to make an output. So is it an issue of scale and ease? I don’t think IP law is really prepared to answer these questions, but from a moral standpoint I could see someone making this distinction.
“I don’t like the societal effects of newer AI models”
This, in my mind, is closest to the true crux of the objection for many. I personally dislike blockchain as a technology, but even if I thought it were good on technical grounds, I would still dislike its societal and political consequences.
But what societal effects of AI are people most concerned about? My general sense is that the main worries are:
- Job losses
- Spreading of misinformation / slop
- Loss of human interactions (e.g. increased isolation)
- Increased ease of crime / fraud
- General worry about change
People seem to drop many of their complaints about AI when they see it in use cases that don’t have these effects. I rarely see people complain about the energy consumption of AlphaFold-like models, for example (even though they are generative models!). Do people object to using deep learning approaches to detect CSAM images? I don’t think so, and I don’t think they would even stop to ask whether non-free images were used in training.
I think a useful mental exercise is to consider how you feel about AI image-captioning models. Meta has actually been doing this for a while with little controversy, and it improves accessibility for the blind. But it is squarely in the LLM / generative AI world, and it also has the potential to spread misinformation and slop. Do you hate that AI?
In the end, people’s distaste for AI is going to be multifactorial. But I think it’s important to be clear about what it is you oppose and why, and I don’t think the public currently has a nuanced enough understanding of the problems. Saying “I hate AI” isn’t going to be productive, but is there anything else that captures what it is people are thinking?