
Reasons why you should avoid using AI

AI is bad. I suppose that has been iterated and reiterated several times over the past couple years.

That sentence is not completely true, in my opinion. However, AI in its current state is rather unfit for use; right now its main purpose is to be a cash cow for the big corporations that benefit from it (think OpenAI and Nvidia). It isn't even a good cash cow, as OpenAI operates at a loss and people are rapidly tiring of “AI” as a buzzword.

I don’t think people should completely stop using AI; it is an amazing technology that can actually help us. But is it really necessary to turn straight to ChatGPT just to ask the simplest questions?

Here are reasons why you should avoid using AI (specifically LLMs), unless absolutely necessary:

1. It uses up a LOT of resources.

It has been estimated that an AI query uses around ten times the resources of a normal Google search.

If you’re asking for simple things, such as a basic brownie recipe or how to get started with the Python programming language, why not just search it up? The answers are common and simple and using an LLM is simply too much.

Even if you’re not someone who actively cares about the environment, it makes sense to opt for something less wasteful at no extra cost.

Besides, LLMs such as ChatGPT take more time to yield results anyway. Why bother waiting?

2. LLM responses tend to be inaccurate

LLMs are literally Large Language Models. They create coherent sentences based on word probabilities.

LLMs are a glorified version of picking random words from your keyboard’s suggestions. Anyone who’s tried that knows the results can get nonsensical. However, LLMs are trained on such large amounts of data that they can be pretty on point.

Still, they can miss. This is because LLMs care much less about what is fact and more about whether the sentences they spout make sense based on the data they were fed.

Calling their responses “inaccurate” is an understatement. LLMs “hallucinate” a lot — that is, they confidently state things that are simply false. To simplify: an LLM can insist that one plus one equals three, because “three” is a sensible word to come after “equals”. The latest models are unlikely to make that exact mistake, but that’s the idea.
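That “one plus one equals three” failure mode can be sketched as a toy next-word sampler in Python. To be clear, this is a deliberately silly stand-in, nothing like how real models are built; the bigram table and every probability in it are made up for illustration:

```python
import random

# Toy "model": for each word, a distribution over the next word,
# as if learned from training text. Every number here is invented.
bigrams = {
    "one":    {"plus": 0.6, "equals": 0.4},
    "plus":   {"one": 1.0},
    "equals": {"two": 0.7, "three": 0.3},  # "three" sounds fine after "equals"
}

def next_word(word, rng):
    """Sample the next word purely from the learned probabilities."""
    candidates = list(bigrams[word].keys())
    weights = list(bigrams[word].values())
    return rng.choices(candidates, weights=weights)[0]

def generate(start, max_words, seed=0):
    """Chain samples into a 'coherent' sentence. Nothing checks truth."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in bigrams:
        words.append(next_word(words[-1], rng))
    return " ".join(words)
```

With some seeds this toy happily produces “one equals three”: “three” carries probability after “equals”, and nothing in the loop ever checks the arithmetic. Real LLMs are vastly more sophisticated, but the loop is the same shape: pick the next token by probability, not by truth.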

If you think about it, using LLMs for advice on more serious matters is dangerous. Imagine an LLM that hallucinates the dosage of a drug used to save a person’s life. Imagine an LLM feeding kill orders to a military (which, if I recall correctly, is either already happening or about to). That’s just horrible. Don’t.

3. Many AI models have unethical practices

This point does not apply to just LLMs, but also image generation models and such.

Perhaps there are AI models somewhere which try to be as ethical as possible, but unfortunately the most popular ones aren’t.

Aside from the environmental concerns outlined above, AI does not respect intellectual property.

It does not care whether or not an artist wants AI to be trained on their illustrations, even if they say so explicitly or sign their work (AI has been known to reproduce those signatures and watermarks, too). It does not care whether or not a website’s developer added a file called robots.txt, which asks crawlers not to scrape parts of the site (honouring it is entirely voluntary, and plenty of AI scrapers simply ignore it). It does not care whether or not your work is copyrighted or whatever — what matters to it is that it can access it and dump it into its ever-growing pool of training data.
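For reference, a robots.txt that asks AI crawlers to stay away looks something like this. GPTBot is OpenAI’s documented crawler token; the rest of the layout is a generic example, and again, nothing forces a scraper to obey it:

```
# robots.txt, served from the site root (e.g. /robots.txt)

# Ask OpenAI's crawler to skip the entire site:
User-agent: GPTBot
Disallow: /

# Ask every other crawler to skip one directory:
User-agent: *
Disallow: /private/
```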

And I won’t be surprised if AI has a really big pool of proprietary source code for training.

If a random, not-rich individual decided to scrape that much data, I’d bet they’d be branded a pirate and land in legal trouble. If they regurgitated it the way AI models do, they’d be in really big trouble, and not just for plagiarism.

4. AI’s training data can easily be gamed

At this point, you probably get the gist of it: AI is trained on tonnnsss of data, like basically the entire Internet and more. Woah.

But seriously, anyone who’s chronically online can tell you how horrible the internet can get.

Racism. Ableism. Ageism. Fat-phobia. Homophobia. Misogyny. Misandry. Just downright being a horrible person, whether a troll or someone with adverse motivations — you get the point. So many people on the internet suck. And that’s the kind of data that AI gets trained on.

Of course, the corporations that own these AI models are doing their best to stop them adopting these behaviours, but something will always slip through the cracks.

Add that to AI’s hallucinations and you have the recipe for disaster. Oh, and people trying to game its scraping algorithms.

There have already been reports of LLMs giving out false phone numbers for certain companies. This can lead to people getting scammed. Sure, we can tell them off for being gullible, but harm is done all the same.

In conclusion…

Okay, fine. AI is bad. It might be a wonderful technology, but society sucks the way it is and AI is simply amplifying that and making it suck even more.

The problems I have outlined above are a non-exhaustive list. It is simply impossible to stop yapping — uh, I mean, list out every single concern with AI in its current state. Maybe someday I will write a full rant on society. But that’s for another day.

For your sake, and my sake, and the sake of everyone on the planet, maybe think twice before you ask an LLM for step-by-step instructions on how to bake a cake or something. Boycott AI.

Of course, if you really can’t find the answers you need anywhere on the internet, I will definitely support your decision to use AI. If you’re in need of a quick translation of a complicated text to another language, or are taking audio-to-text notes of a lecture, knock yourself out. In an ideal world, people would only use Artificial Intelligence for the right purposes.

That’s all for now.

( #AI )

Last commit: 3077454