The so-called “AI” is not intelligent

Please, stop calling these things “AI”

What the media currently calls “AI” is actually a far cry from the sci-fi concept we associate with the word. Although artificial, ChatGPT et al. are not intelligent, and treating this kind of technology as if it had thoughts, understood meaning, or grasped other abstract concepts only overestimates what it really does: mimic human communication.

Imagine a parrot, or a raven, or any other kind of bird that copies human speech. They learn by repetition, just copying what was said before, usually in random contexts. Sometimes they can string together what seem to be fully coherent sentences, but they don’t really understand what the words mean; they are just repeating something because it gets a reaction from the humans, and then adjusting it through positive reinforcement. Although these kinds of birds can be very, very smart, they have no concept of “meaning”: they can’t make the inferences or handle the abstraction required to use human words.

The same goes for the so-called “AI”: it is just copying stuff that’s already there and passing it off as if it were human-made. There’s even a term for this: stochastic parrot. “Stochastic” means “determined by random probability” rather than by understanding, and “parrot” means, well, the bird that can mimic the human voice.

I think it’s clear where I’m going with this.
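If you want to see just how unglamorous “stochastic” really is, here’s a deliberately silly sketch. The words and counts below are made up by me purely for illustration; a real model juggles billions of parameters instead of a tiny table, but the basic flavour of “pick the next word by probability” is the same:

```python
import random

# A toy "stochastic parrot": given the current word, pick the next one at
# random according to how often each continuation appeared in the "training"
# text. There is no meaning anywhere in here, only counted co-occurrences.
# (Tiny made-up counts, for illustration only.)
next_word_counts = {
    "polly": {"wants": 8, "is": 2},
    "wants": {"a": 6, "the": 4},
    "a": {"cracker": 7, "walk": 3},
    "the": {"cracker": 5, "window": 5},
}

def babble(start: str, length: int = 4) -> str:
    words = [start]
    for _ in range(length):
        counts = next_word_counts.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(babble("polly"))  # e.g. "polly wants a cracker": statistically likely, never "understood"
```

Run it a few times and you’ll get slightly different strings of words, none of which the program “meant”.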


The machine isn’t alive, nor does it have consciousness

This article from The Verge talks about people forming real relationships with computer programs made to imitate humans, and the difficulties and shortcomings that come from this. I won’t pass judgment on lonely people wanting some company or understanding and not finding it in other flesh-and-blood humans, especially because the emotional comfort or attachment these people feel is very much real, even though the being responding to them isn’t real and can’t know what an “emotion” is.

However, right at the beginning of the article, there are mentions of how we humans imbue everything around us with meaning or intention. I’ll quote three full paragraphs straight away because they touch on the subject in a way that couldn’t be any clearer:

Our tendency to imbue things with human-like minds is a fundamental fact of our cognition. Scholars as far back as Xenophanes noted the habit and linked it to religion, that belief in human-like intention behind the world and everything that happens in it. More recently, anthropologist Stewart Guthrie built on this idea and posited an evolutionary explanation. As social animals, the most important factor in our survival and success is other people. Therefore, when encountering something ambiguous — a shadow, an oddly shaped boulder — it’s better to err on the side of seeing another person, the penalty for sometimes getting startled by a shadow being lower than mistaking friends and foes for boulders. Whatever the reason, the reflex is now hardwired, involuntary, and triggered by the slightest human resemblance. Studies show that by the age of three, children will attribute beliefs and emotions to simple geometric shapes moving around a screen.

Few behaviors trigger the anthropomorphic reflex as powerfully as the use of language, as demonstrated by the first chatbots. The most famous example occurred in the 1960s at MIT, when the computer scientist Joseph Weizenbaum tested an early dialogue program called ELIZA. It was a simple program, designed to mimic a Rogerian psychotherapist, mostly by rephrasing the user’s statements back to them as questions — “I need some help”; “what would it mean to you if you got some help?” for example. Users were entranced. People familiar with the program’s mechanics nevertheless asked for privacy with the terminal. Practicing psychotherapists heralded the dawn of automated mental healthcare. “What I had not realized,” Weizenbaum later wrote, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in otherwise normal people.”

The delusion, in Weizenbaum’s view, was inferring from the use of human language all the other capacities that typically come with it, like understanding and empathy. It was not enough to intellectually know how the machine worked; seeing things clearly requires “very great skepticism,” Weizenbaum wrote, “the kind we bring to bear while watching a stage magician.” He worried that failure would lead people, and society at large, to place machines in roles that ought to remain the preserve of humans.

So, just because something can communicate like a human doesn’t mean there’s really a human intention behind it. But, supposedly, our brains are wired to assume there might be, because it’s better, evolutionarily speaking, for a social creature to think this way than not to. It’s sort of a self-made delusion.

One sentence I would like to repeat, though, is this one: “The delusion was inferring from the use of human language all the other capacities that typically come with it, like understanding and empathy”. This is important because it hits on our weak spot: communicating like us makes it seem like it thinks like us, but in actuality there’s no thought or meaning at all, just statistically chosen words.

However…

This view of anthropomorphization as a potentially dangerous error remains widespread. But there is another view, which is that anthropomorphization is inevitable and that taking advantage of it can be a great design hack. Studies show that people find machines with names, voices, and other human traits more likable, trustworthy, and competent, attributes any designer would want their product to possess.

So, yeah, people can take advantage of our predisposition to anthropomorphize things to alter our perception of said things, which is mostly what these “AI” companies have been doing. They are exploiting a built-in “biological hack” our minds have as a way to sell their useless product, and most people tend to fall for it, either because they don’t know the technology well enough, or because they have convinced themselves that the machine is, in fact, alive and can think like a human.

This, however, is a fallacy that has followed the brain-mind sciences ever since humanity started to question its own existence and consciousness.


The machine isn’t modeled after the human brain, nor does it work like one

These programs aren’t copying the processes that happen inside a human brain. Many people, especially in the computery fields, think they are because of the “neural networks”. It’s important to know that the “neural” in this case is just a metaphor, in the same way that saying “there’s a bug in my software” doesn’t mean there’s a digital representation of an insect wreaking havoc in the code, and “booting” a system doesn’t mean you’re going to literally kick a computer (…well, maybe, I guess).
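For anyone curious about how un-biological that metaphor is, here’s roughly everything a single artificial “neuron” amounts to: a weighted sum of numbers squashed by a simple function. The numbers below are arbitrary, a minimal sketch to show it’s plain arithmetic, not a model of a brain cell:

```python
import math

# One artificial "neuron": multiply inputs by weights, add a bias, squash the
# result through a nonlinearity (here, a sigmoid). That's the whole unit the
# word "neural" refers to. (Example values are arbitrary.)
def neuron(inputs, weights, bias):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid "activation"

print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.1], bias=0.05))  # just arithmetic
```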

In this article on Aeon, a person who actually studies the human mind makes a pretty clear case that the human mind doesn’t operate like a computer. Can you imagine the breakthroughs we would’ve had in neurology, neuropsychology, and psychology itself if a computer scientist had actually built a model that perfectly mimics how our grey matter works? The jump that would’ve happened in mental healthcare?

But alas, that didn’t happen.

As expertly put:

No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.

(…)

Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving.

But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.

As someone who studied psychology at university, this was already immensely obvious to me. As someone who also studied linguistics, doubly so. One of the most mysterious things our mind does is attribute meaning: how that happens, how it reflects our subjective experience of the world, and how we try to translate that experience — our thoughts — into words so other people can also experience, or at least try to understand, our internal processes regarding external stimuli. And although we know how some of the biological mechanisms work, modern science still hasn’t figured out the whole thing.

Hell, to this day there are medications used in mental healthcare whose workings researchers can’t explain; we just know they produce some sort of desirable result that’s repeatable for most of the population.

This all means that the arguments of “this AI has feelings” or “it can think” or “it has a personality that emerged on its own” aren’t real arguments, just our minds anthropomorphizing something that mimics human communication. And since it doesn’t mimic the inner workings of our brain, it’s also incapable of ascribing meaning to things, or of having a subjective experience that shapes its expression to the outside world, which is the basis of all human art and communication.

So, no, large language models will never intentionally write an original novel that speaks to the human soul, the same way that a parrot will never be able to recite poetry that talks about its inner experience as a bird. Given an infinite amount of time, it’s possible that it could happen by accident (hence the “intentionally”), but the hypothetical exception doesn’t make the rule.

The article also touches on something that I had never thought about before: we always try to explain the existence of our consciousness, or the inner workings of our mind, through the most cutting-edge technology of the time. Currently, computers are the most advanced thing humanity has made, so “our minds work like a computer”. To people in the 1800s, the brain worked just like a telegraph sending signals down the line. In the 1700s, the brain worked through electricity. In the 1600s, human thinking was said to derive from “small mechanical motions in the brain”, while in the 1500s the whole body was posited to work just like an automaton. When hydraulic engineering was developed, the human mind was ruled by the motion of four fluids, also known as “humors”. Before that, each civilization had its own explanation for why humans had “souls”, mostly based on religious beliefs.

It seems most of what we have today arose from things like this:

(…) the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain.

Now, how much does a mathematician actually know about the human brain? Not that much, I would guess. To me, this seems a clear case of epistemic trespassing — one that is still very common to this day. It’s easy to see this online, when people who study the Humanities try to argue back against “AI” bros about how the mind, consciousness, self-reflection, art, creativity, and so on really work, only to be rebuffed with “the machine can do that”.

Can it, though?

The proponents of this kind of “AI” say that “AI” art is generated “the same way that humans learn and make art: by copying”. At this point, it should be clear that this is so, so far removed from how humans learn and make art that it’s not even worth getting into the details. It’s why teaching kids requires some sort of specialization in most countries. Suffice it to say that things without self-awareness cannot have subjective inner experiences, nor ways to externalize those experiences expressively, which means there’s no artistic expression to be found, just a random assemblage of (stolen) data they sift through while trying to match a human-made prompt.

Meanwhile, humans can express their inner experiences, artistically or not, in their own unique way, which is completely different from how generative “AI” software works. As the Aeon article puts it:

Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.

This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) – they can re-experience hearing the story to some extent, although not very well (…)

This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain.

And people still have the gall to compare this to a machine choosing the most statistically probable result we would like to see, creating the most mid and soulless representation of whatever we had already created in our minds.


The machine will never reach the “dream” of AGI

If “AI” could actually learn, it would be as precise as a calculator when doing math, which doesn’t happen. Having fed off the entirety of human knowledge, ChatGPT should have learned basic math (if it had the “intelligence” to really “learn”), but ask it for the result of 478563453 * 4354353 (or any other large numbers) and it will give an objectively wrong answer that can easily be checked with a calculator (or by doing the math by hand).

Because ChatGPT doesn’t know math.

Because it can’t know things, because it’s incapable of “learning”.

And that’s the crux of this technology: despite mimicking human communication — or human creation in general — its basis doesn’t rely on understanding information, just on absorbing and repeating.
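For the record, checking that multiplication yourself takes a couple of lines of ordinary code, because exact arithmetic is a rule that generalizes to any pair of numbers, which is precisely what an LLM doesn’t have:

```python
# No "intelligence" needed: multiplication is a rule that works for any inputs.
a, b = 478563453, 4354353
print(a * b)           # the exact product, correct every single time
print(a * b == b * a)  # True: the rule generalizes, no training examples required
```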

This article does a good job of explaining why, using technical terms but without diving so deep that it becomes impossible for laypeople to understand. As I said previously, “neural networks” don’t copy the inner workings of the human brain, so software built on them doesn’t really “learn” logic; it just compares stored values. The article gives us a nice illustrative example of how this works:

To consider the relationship between generalization, explanation, and understanding, let us consider an example. Suppose we have two intelligent agents, AG1 and AG2, and suppose we ask both to evaluate the expression “3 * (5 + 2).” Let us also suppose that AG1 and AG2 are two different “kinds” of robots. AG1 was designed by McCarthy and his colleagues, and thus it follows the rationalist approach in that its intelligence was built in a top-down, model- (or theory-) driven manner, while AG2’s intelligence was arrived at in a bottom-up data-driven (i.e., machine learning) approach (see Figure 1). Presumably, then, AG2 “learned” how to compute the value of arithmetic expressions by processing many training examples. As such, one can assume that AG2 has “encoded” these patterns in the weights of some neural network. To answer the query, all AG2 has to do is find something “similar” to 3 * (5 + 2) that it has encountered in the massive amount of data it was trained on. For simplicity, you can think of the memory of AG2 (with supposedly billions of parameters/weights) as a “fuzzy” hashtable. Once something ‘similar’ is detected, AG2 is ready to reply with an answer (essentially, it will do a look-up and find the most “similar” training example). AG1, on the other hand, has no such history, but has a model of how addition and multiplication work (perhaps as some symbolic function) and it can thus call the relevant modules to “compute” the expression according to the formal specification of the addition and multiplication functions.

(…)

Here is a summary of the main limitations of the empirical AG2:

  1. AG2 does not “understand” how addition and multiplication work
  2. Because of (1) AG2 cannot explain how or why 3 * (5 + 2) evaluates to 21
  3. Because of (1) and (2), and while AG1 can, AG2 cannot (correctly and reliably) evaluate any expression that is out of distribution (OOD) of the data it has seen during “training”

Essentially, because the theory/model-driven AG1 “understands” the processes (i.e., because it “knows” how addition and multiplication work), it can explain its computation, while AG2 cannot and will always fail when asked to handle data that is out of distribution (OOD) of the examples it was trained on. Moreover, because AG1 “understands” how addition and multiplication work, it can generalize and compute the value of any arithmetic expression, because the logic of (add m n) and (mult m n) works for all m and for all n. Note, therefore, that to achieve understanding (and thus explanation and generalization) AG1 had to be a robot that can represent, manipulate, and reason with quantified symbolic variables. Such systems are not based on the “similarity” paradigm, but on object identity, while all AG2 can perform is object similarity (…)

LLMs and such don’t have reasoning. They don’t know logic. They just use stored values and do some statistical shenanigans to arrive at the most probable answer, and trying to make them explain how they arrived at that answer will only result in the same process being repeated, instead of their inner workings being exposed for scrutiny. It’s what’s been called a “black box”, where results can’t be predicted, requiring models to be “tuned” until they more or less give the expected result.
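To make that AG1/AG2 contrast tangible, here’s a tiny sketch of my own (the names are borrowed from the quote; everything else is a deliberately crude simplification, not the article’s actual code): one function applies the actual rules of arithmetic, the other just looks up the answer of the most “similar” example it has memorised.

```python
import ast
import operator

# "AG1": a rule-based evaluator. It knows *how* + and * work, so it handles
# any expression, including ones it has never seen before.
OPS = {ast.Add: operator.add, ast.Mult: operator.mul}

def ag1_eval(expr: str) -> int:
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported syntax")
    return walk(ast.parse(expr, mode="eval").body)

# "AG2": a lookup table of previously seen question/answer pairs. It computes
# nothing; it just returns the answer of the most "similar" memorised example
# (similarity here is crudely measured by shared characters).
TRAINING_DATA = {"3 * (5 + 2)": 21, "2 * (4 + 1)": 10, "7 + 7": 14}

def ag2_eval(expr: str) -> int:
    def similarity(a, b):
        return len(set(a) & set(b))
    best = max(TRAINING_DATA, key=lambda seen: similarity(expr, seen))
    return TRAINING_DATA[best]

print(ag1_eval("3 * (5 + 2)"))   # 21, computed from the rules
print(ag2_eval("3 * (5 + 2)"))   # 21, but only because it was memorised
print(ag1_eval("9 * (8 + 6)"))   # 126, an unseen expression is no problem
print(ag2_eval("9 * (8 + 6)"))   # wrong: it can only parrot a nearby example
```

The lookup version happens to get the memorised expression right and falls flat the moment the input drifts outside its “training data”, which is exactly the out-of-distribution failure the article is describing.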

The article also gives another example, this time from linguistics, which is perfect for me because it shows why neural networks will never really match the quality of a properly trained human translator or writer.

Another problem, directly related to the above discussion on generalization, explanation, and understanding, is the out of distribution (OOD) problem and this also shows in problems related to true language understanding. One typical example is related to the linguistic phenomenon known by copredication which occurs when an entity (or a concept) is used in the same context to refer, at once, to several types of objects. As an example, consider the example below where the term “Barcelona” is used once to refer, simultaneously, to three different types of objects:

  • I was visiting Barcelona when it won over Real Madrid and when it was getting ready to vote for independence.

In the sentence above ‘Barcelona’ is used to refer, at once, to: (i) the city (geographic location) I was visiting; (ii) to the Barcelona football team (that won over Real Madrid); and (iii) to the voting population of the city. All tested LLMs wrongly inferred that the geographic location is what won over Real Madrid and that the geographic location was voting for independence. The point is that these LLMs will always fail in inferring the implicit reference to additional entities that are not explicitly stated. The reason these systems fail in such examples is that the possible copredications that one can make are in theory infinite (and you cannot chase Chomsky’s infinity!) and thus such information will always be out of distribution (OOD) (see [3] for more examples on why stochastic text generators like GPT will always fail “true” language understanding in copredication and other linguistic phenomena, regardless of their impressive generative capabilities).

As we can see, the machine fails at a very basic logical interpretation that every human is capable of naturally making. Because it doesn’t work like our brains, it doesn’t have real “understanding”: it doesn’t know “meaning” or intentionality or figures of speech, even though it has devoured tons of books, slang, and explanations about these very subjects.

Since this technology doesn’t really work with “logic”, it doesn’t matter how much material it’s fed: it will never become a true AGI, as so many techbros and investors hope. It will get better at finding the most probable answer to what people ask it to do, but it will never do so with the certainty of a trained human, or of a finely tuned machine or piece of software designed for that specific activity.

As the article concludes:

As impressive as LLMs are, at least in their generative capabilities, these systems will never be able to perform high-level reasoning, the kind that is needed in deep language understanding, problem solving, and planning. The reason for the qualitative “never” in the preceding sentence is that the (theoretical and technical) limitations we alluded to are not a function of scale or the specifics of the model or the training corpus. The problems we outlined above that preclude explanation, understanding, and generalization are a function of the underlying architecture of DNNs (…)

And again, that’s because these things not only don’t mimic the human brain or the human learning experience, but are also incapable of properly “learning” or developing “logic”, because that’s not how they’ve been built. This is why the fears of an overlord AI taking over the world and whatnot make no sense: it literally can’t happen with this specific type of technology. What will happen (and is already happening) is easily impressed people making decisions about something they don’t understand and making life hard for everyone else as a consequence.


“AI does not exist but it will ruin everything anyway”

That is the title of this video, and it’s something I can completely relate to. I’m a translator, and everyone who works in my field — translation for the creative industries — knows that machine translation and LLMs are pretty terrible at this job. But no matter how much we try to prove this to executives and CEOs, they keep pushing for it because “it will get better” (and I just argued why it won’t), or because it’s “good enough for people” (only if you’ve never read a proper translation made by someone well paid), and so on.

So, yeah, artificial intelligence does not exist, but it’s ruining everything anyway. In my field, it’s making rates stagnate or go down while the amount of work required to do a good job doesn’t change. Because, oh yeah, fixing a machine translation is as much work as (if not more than) translating something from scratch, except that now companies pay less than half the usual translation rate because “it’s already translated, you just have to do some fixes”. Most of the time, it’s faster to rewrite whole sentences than to edit out bad grammar or typos, and when the machine doesn’t get slang or turns of phrase, it does require a complete rewrite.

And hey, guess what? That’s how most of the text in creative industries is written, but since these machines don’t “understand”, can’t interpret “meaning”, don’t have cultural references, can’t produce metaphors, and so on, it will never get better.

But it gets worse.

As ChatGPT and the like get more popular, people start to intrinsically trust their outputs because they really think these things are “AI”, that they have human levels of intelligence (or more) and a perfect, factual “memory”. None of this is true, of course, but the marketing and propaganda around them is so strong that it makes life more difficult for people dealing with actual factual information. Since last year, for example, I’ve seen people saying they have to buy books to find reliable factual references for artistic styles, because the slop machines now pollute internet searches: looking up a “gothic cathedral” on Google, to study and learn its elements, has a pretty high chance of returning “AI”-generated images that look like real gothic cathedrals but have stylistic incongruities and mistakes that a human artist or an architect or a historian wouldn’t make.

As expected, the political field has now been filled with half-truths too. As if fact-checking politicians wasn’t already hard enough, disinformation campaigns can now output fake news at industrial scale. As Brandolini’s law says, “The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it”, and that becomes ever clearer when you read articles like this one from The Verge and see how much trouble the reporter went through just to disprove a simple claim, only for the debunking to make almost no dent in the claim’s spread.

In the end, nobody really benefits from this technology: truth becomes more troublesome to prove, creative jobs are replaced because the people who make the decisions don’t understand how those jobs work, the well of knowledge that the internet (arguably) used to be has become an infected pool full of generated slop, everyone asks machines that produce statistically generated human-like text for explanations or information instead of opening an encyclopedia or going to the library, and so on.

Humanity at large is at a net negative due to the widespread use of this tech and the heavy marketing that calls it “AI”.


The machine can’t make art and doesn’t help productivity

Humans need ways to express themselves and relate to each other. It’s part of the social bonding our species developed, and it has allowed us to survive on this planet for so long against diseases and predators. Art isn’t just something pretty you stop to look at, but something that encapsulates someone else’s whole internal experience, given form in the outside world, allowing other people to relate to it on different levels. Yeah, you might be indifferent to this music or that painting, but other people won’t be, and that’s how it has always been for as long as humans have existed. “Culture” isn’t a monolith, and that’s why it’s important to have a vast and diverse representation of people’s experiences.

Now, of course, people making art also need to capitalize on their work; after all, nothing in this world is free. But honing the practical skills of expression needed to make other people’s internal experiences manifest in the real world (also known as “making art commercially or for other people”) is not something that can be automated. There’s a reason why artists working for someone else first make rough sketches before going for the real deal. This isn’t manufacturing, where every part of a machine or whatnot is standardized and made to scale. I talked briefly about it here, and it’s true: even in a capitalist society, products that rely on creativity can’t be scaled to offset their costs, yet people who don’t understand how creative fields work are the ones making the decisions. It’s easy to listen to a 3-minute song, think it spontaneously appeared in the musician’s mind that way and they only had to record it, ignore the (probable) weeks of mental effort and physical practice needed to have a full composition and the ability to play it without mistakes, and then say “hey, I bet they can make ten more songs like this in just a couple of days”.

The issue here is that turning art into a product (and seeing it only as a product and nothing else) leads people to think it’s possible to make artists more productive through automation. This article talks about many of the “myths” regarding generative “AI”, but the most important one here is the “productivity myth”. As the article puts it:

The productivity myth suggests that anything we spend time on is up for automation — that any time we spend can and should be freed up for the sake of having even more time for other activities or pursuits — which can also be automated. The importance and value of thinking about our work and why we do it is waved away as a distraction. The goal of writing, this myth suggests, is filling a page rather than the process of thought that a completed page represents.

However…

The productivity myth sells AI products and should be assessed on its merits. Automation is often presented as a natural driver of productivity — but as MIT’s Daron Acemoglu and Boston University’s Pascual Restrepo have shown, it’s not universal. Some automation is mostly good at producing economic inequality, limiting benefits to the concentration of wealth. Meanwhile, an Upwork study has shown that “96% of C-suite leaders expect AI to boost worker productivity [while] 77% of employees report AI has increased their workload.”

And, sometimes, being “productive” is a far cry from the original intent of something:

Researchers Dagmar Monett and Bogdan Grigorescu describe a related myth as “the optimization fallacy,” the “thinking that optimizing complex processes and societies through their simplification and fragmentation is the best option for understanding and dealing with them.” We see this in the disturbing example of a father taking away his daughter’s agency in writing a letter as a beneficial use in the guise of freeing up time — a logic of “it can be done, and so it ought to be done” that misses the mark on what most people want to do with their time.

In the end, we can see how the Luddites were right when they “protested” the automation of their craft: they weren’t against new technological tools, they were against technology that would cheapen the value of their labor while making a worse product. And it’s the same pattern being repeated here, with the advent and heavy marketing of these “new” “AI” tools, where an activity that’s inherently human and can’t be performed by a machine is devalued based solely on its worth as a product, without any regard for quality. As Brian Eno says here:

AI is always stunning at first encounter: one is amazed that something nonhuman can make something that seems so similar to what humans make. But it’s a little like Samuel Johnson’s comment about a dog walking on its hind legs: we are impressed not by the quality of the walking but by the fact it can walk that way at all. After a short time it rapidly goes from awesome to funny to slightly ridiculous—and then to grotesque. Does it not also matter that the walking dog has no intentionality—doesn’t “know” what it’s doing?

(…)

Now and again, something unexpected emerges. But even with that effort, why would a system whose primary programming is telling it to take the next most probable step produce surprising results? The surprise is primarily the speed and the volume, not the content.

And I completely agree. As a translator in the creative industry, and also a hobbyist writer and musician, I’ve had all these aspects of my work poisoned by the so-called “AI”. My daily life and my job have been made worse by it, and I’m tired of having to explain that to people. I do want tools that can make my life and my job easier; what I don’t want is things that create slop I then have to sift through to please an investor somewhere, while being paid less than my expertise is worth to fix that slop.

I’m not against new technology — hell, every translator I know is pretty happy using CAT Tools instead of a typewriter or MS Word — but I am wholly against things that remove my agency from my job and hand the decision to someone else who’s outside of their field of expertise. And every artist, linguist, writer, musician, and creative person I know is the same: nobody is asking for this, but it’s being pushed down our throats whether we want it or not.

Wanna know the worst part? I’ve been saying all of the above for years. I didn’t arrive at these conclusions because I just read these articles and thought “hell yeah”, but because I’m a professional who knows his job very well and could see the cost-benefit trade-off wasn’t worth it; nobody listened, though. As research started pouring in on the subject, it felt very gratifying to be validated by hard data and public perception. I’m not surfing on top of a hate train; I actually have to deal with this shit every other day.

Translators have been saying some variation of this ever since Google Translate became a thing and people started yelling that translation as a career “is over”. It’s been almost 20 years and they are still yelling the same thing.


Epilogue

There’s not much else I can say. I’ve read tons of articles and research over the years and compared them to my own experience in creative and creative-adjacent fields, and the ones mentioned in this post are just the most recent. There’s so much more that could be talked about. For example, did you know that some people wanted to change the definition of “intelligence” so they could include LLMs in it? And guess what: they were all techbros, none from the fields that actually study what “intelligence” is. Or that the self-checkout Amazon grocery stores actually relied on people in India watching tons of footage to make sure the machines were charging customers correctly? And they never worked as well as claimed, which goes to show how unreliable any unchecked large-scale application of neural networks can be.

In the end, I’d say that creative fields are safe because the so-called “AIs” will never be able to truly reproduce the human experience and human creation. It’s all smoke and mirrors, a “mechanical Turk” if you will, and it always has been, trying to make naive people believe that company A or company B has a truly revolutionary tech that will “disrupt” the world as we know it. Sadly, a lot of people buy into this without analyzing what’s really going on. So yeah, while our jobs are safe, they are being devalued by people who have never worked in the fields they want to replace with “AI”. But as long as they call it “artificial intelligence”, people will keep falling for it.

I really miss the times when computer engineers and other tech people saw a common, annoying problem and tried to actually solve it for the benefit of everyone else, instead of creating things no one asked for and making everyone else miserable. Is it too much to ask for tools that could help with my job instead of making it worse?

I wish people at least stopped citing CEOs as a reliable source of information.


Note: I won’t be debating any of the points mentioned above. I’m already too tired of doing that in my day job, where at one point I literally did the (unpaid) work of collecting samples of mistranslations that required whole sentences to be rewritten from scratch, to show what the issue was, only to be waved off with “you’re afraid of losing your job to the machine”. Comment if you’d like but, if you’re being an ass, I’ll block you.

On the other hand, I appreciate it if anyone wants to buy me some coffee, which fuelled me through this furious typing.
