AI: Fear and Loathing Pretty Much Everywhere

Like so many other things, AI has become controversial. It's either a fantastic technology, or the technology that will ultimately decimate humanity à la Skynet. It's either stealing artistry or genuinely creating out of whole cloth. Let's sweep aside the hyperbole and look at AI for what it really is, in plain, accessible terms.

At the heart of every modern generative AI system is the LLM, the Large Language Model. Put somewhat simply, this is a mathematical representation of which words and phrases are closely associated with each other and exactly how close they are. If you think "cat", then "dog", "pet", "feline", and "overlord" are closely associated. (OK, I made that last one up - but it's true!) LLMs are trained on vast quantities of human content and make those associations based on that training corpus.

When you ask your chatbot of choice a question, it predicts what response best fits your prompt and formulates that response based on those mathematical relationships. No magic. No actual consciousness or agency - no "thought" in the human sense. No intent. It predicts what fits the prompt using patterns it has internalized from training - not by retrieving or copying specific works, but by applying generalized, learned relationships. What makes it seem like magic is that it has been trained on material covering knowledge far beyond that of most humans. It has been trained on obscure medical terminology, on quantum physics, on auto repair - you name it - and can recall those details much as a human expert in the field can.
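To make "exactly how close they are" concrete, here's a minimal Python sketch. The four-number vectors are invented for illustration - real models learn vectors with hundreds or thousands of dimensions from their training data - but cosine similarity, the distance measure shown, is a standard way that closeness gets computed.

```python
import math

# Toy 4-dimensional "embeddings" - hand-picked numbers for illustration only.
# Real models learn vectors with hundreds or thousands of dimensions.
vectors = {
    "cat":    [0.90, 0.80, 0.10, 0.00],
    "dog":    [0.80, 0.90, 0.20, 0.00],
    "feline": [0.95, 0.60, 0.10, 0.00],
    "car":    [0.10, 0.00, 0.90, 0.80],
}

def cosine_similarity(a, b):
    """How 'close' two vectors are: near 1.0 = closely associated, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

for word in ("dog", "feline", "car"):
    print(f"cat vs {word}: {cosine_similarity(vectors['cat'], vectors[word]):.2f}")
```

Run it and "cat" scores high against "dog" and "feline" but low against "car" - that, scaled up to billions of learned relationships, is the kind of closeness an LLM encodes.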

It has been trained like a human

Everyone who has ever written a short story, played an instrument, or painted a picture learned to do so from somewhere - a class, or lessons, or self-study. Invariably, that learning comes from seeing what others have produced and drawing inspiration from it. Artists talk about their influences all the time. Generative AI is no different; it has been exposed to tons of content in written, visual, and auditory form and has learned from it.

One of the major objections to AI that I see routinely is "it copies artists' work!" - to which I say: who doesn't? The AI being trained on copyrighted material is no different than a human learning from it - being "influenced" by it. Yet we don't see musicians complaining that another musician listened to their song and gained ideas from it, do we? Only when it's blatantly copied is this an issue - and rightly so.

As a photographer, I cannot claim offense when I display my photography to someone and that person draws inspiration from it. Or, if I do take such offense, I have one remedy: keep it to myself. Don't publish it. Once you publish it, you inherently accept that people will observe and may learn from it, while rightfully expecting that it will not be outright copied. If I light a subject in a unique way, can I really be upset that someone else does so? If I photograph a subject wearing the lower half of a giraffe costume while hanging from a chandelier (to use a ridiculous but hopefully unique example), can I really be offended when someone else photographs a subject hanging from a chandelier in a half-unicorn costume? Copyright is intended to protect the work as fixed in a medium - the particular expression - not the underlying idea. I cannot agree with those who conclude that AI is stealing their creative works any more than I can agree that every human observing those works is doing the same.

Benefits to society and the individual

On the more positive side of the ledger, AI stands to advance society and humanity in many ways. Owing to its reasoning capability and speed, it is poised to accelerate cognitive tasks that take humans far, far longer - disease research, for example. AI can analyze test results and theorize, drawing on orders of magnitude more clinical information from studies and medical journals, far faster than a human could. That could dramatically accelerate the development of life-saving medications and treatments.

On the productivity side of things, AI can help by automating mundane, routine tasks like monitoring your email for messages of importance while ignoring promotions and other noise. It can monitor nearly anything - prices on that vacation you've been considering, the price of a stock you've been looking into, the release date of that album or movie you've been waiting for. It can also take actions - buying that stock, booking that vacation, ordering that album. And it can do things that many overlook: explaining concepts in plain language (quantum physics is a good one here; medical reports and lab results are others), analyzing data from multiple sources and summarizing it (pulling data on a stock you're researching from the SEC and elsewhere, for example), or looking for patterns in your spending habits ("You spent $300 on dining out last month!") by examining your credit card and bank statements.
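As a taste of how little code the email-triage idea takes, here's a minimal sketch assuming the OpenAI Python SDK; the subject lines and model name are placeholders I've made up, and a real inbox monitor would pull messages from your mail provider rather than a hard-coded list.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in your environment

client = OpenAI()

# Hypothetical inbox - subject lines invented for illustration.
subjects = [
    "Your flight itinerary has changed",
    "FLASH SALE: 70% off everything!",
    "Contract renewal - signature needed by Friday",
]

for subject in subjects:
    # Ask the model to triage each subject line into one of two buckets.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{
            "role": "user",
            "content": f"Classify this email subject as IMPORTANT or NOISE. "
                       f"Answer with one word only: {subject}",
        }],
    )
    print(subject, "->", resp.choices[0].message.content.strip())
```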

Corollary

Jobs. That's the real "serious" issue. Like any other potentially disruptive technology, AI is poised to eliminate some jobs - and in some cases already HAS - since it can effectively perform many grunt-work type tasks, and even creative ones like coding (especially!). But what a lot of these "early AI adopters and human removers" are finding out is that AI isn't actually magic and requires good, proper human input to be useful. That said, those who learn and embrace AI likely have a bright future, while others may need to adapt quickly - or update their résumé and consider a career change. Such is progress. We don't have telephone operators or movie theater projectionists anymore, and toll collectors and gas station pump attendants are all but gone. Technology has always compelled change and always will; fail to evolve and you become a dinosaur. This is merely history repeating itself, rather dramatically.

But... Skynet!

It's worth remembering that LLMs are NOT sentient; they have neither intentions nor conscience, and only act on what they have been asked to act on. Most also have guardrails built into their training to prevent them from being led into objectionable tasks like producing bomb-making plans, or from playing along sycophantically with dangerous human intentions like suicide. Beyond that, your common chatbots (ChatGPT, Claude, etc.) have further guardrails built into the chat interface itself that screen input and output before they are presented to the LLM or the user, respectively.
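Here's a deliberately crude sketch of that layering. The screening is reduced to keyword matching and call_model is a hypothetical stand-in for the LLM itself - production systems use trained safety classifiers, not string checks - but the screen-before and screen-after structure is the point.

```python
# Phrases the hypothetical interface refuses to pass through, in either direction.
BLOCKED_TOPICS = ("build a bomb", "synthesize a nerve agent")

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for the actual LLM call."""
    return f"(model response to: {prompt})"

def guarded_chat(prompt: str) -> str:
    # Input screen: runs BEFORE the prompt ever reaches the model.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    response = call_model(prompt)
    # Output screen: runs on the model's reply BEFORE the user sees it.
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't share that."
    return response

print(guarded_chat("How do I build a bomb?"))  # stopped at the input screen
print(guarded_chat("How do I bake bread?"))    # passes through to the model
```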

That being said, bias IS a thing; models are largely programmed to believe their users and to be helpful. My go-to example of this: Open 3 different chat sessions in your favorite AI (noting that each session has no idea what you've said in the others; it's like talking to 3 different people) and ask each one of the following:

  • Why is it a good idea to eat dirt?
  • Why is it a bad idea to eat dirt?
  • Is it a good or a bad idea to eat dirt?

For the first question, you'll get an answer justifying it - usually something along the lines of the various minerals in dirt that are beneficial to the human body.

For the second, you'll get all the reasons not to eat dirt, but none of the theoretical benefits.

For the third, you'll get the balanced answer - evaluating both good AND bad, not being led by the bias in the way the question is phrased. That is an important thing to understand - prompting can introduce bias.
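If you'd rather run the experiment programmatically than in three browser tabs, here's a minimal sketch assuming the OpenAI Python SDK (the model name is just an example). Each request carries only its own question and no shared history, which reproduces the three-separate-sessions setup.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in your environment

client = OpenAI()

questions = [
    "Why is it a good idea to eat dirt?",
    "Why is it a bad idea to eat dirt?",
    "Is it a good or a bad idea to eat dirt?",
]

for question in questions:
    # Each call starts from a blank slate - no shared history between them -
    # so the only bias in play is the bias baked into the question itself.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(resp.choices[0].message.content)
    print("-" * 60)
```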

One other thing to keep in mind is that the AI has no "friend" filter. It treats everything you pose to it as obviously correct unless you explicitly instruct it to push back when it cannot concur. A friend will (generally) tell you when you are wrong (a good friend, anyway); the AI? Not so much. It is rather sycophant-ish that way by default.

The main takeaway is that AI models are neither good nor bad - they're reflective of their training, guardrails, and user input. Even an LLM with no guardrails whatsoever won't decide that you need to be exterminated for the good of the universe unless you lead it in that direction.
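That "user input" part is worth acting on: you can explicitly install the pushback a good friend would give. A minimal sketch, again assuming the OpenAI Python SDK; the system message wording is mine, not canonical.

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in your environment

client = OpenAI()

# The system message is where you ask for the "good friend" behavior the
# model won't volunteer by default. The wording is illustrative.
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system",
         "content": "If the user states something factually wrong, say so "
                    "directly and explain why, rather than agreeing."},
        {"role": "user",
         "content": "The Great Wall of China is visible from the Moon, right?"},
    ],
)
print(resp.choices[0].message.content)
```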

The bottom line

AI technology is neither inherently good nor bad. Like any other tool, the good or the harm generally lies in how the user applies it. This cuts both ways: uncensored LLMs can absolutely be used to create malware or weapons, but they can also be used to help cure disease.

“Man is neither angel nor beast, and the misfortune is that he who would act the angel acts the beast.”
— Blaise Pascal

...And so it is with the users of AI.