Are we taking artificial intelligence seriously enough?

Last edited: 26th April 2026

As humanity, we seek to advance ourselves. Our motives vary widely, though it is obvious how we strive towards automation, convenience, and material pleasure in a life so complicated. For this reason, we have to ask ourselves to what extent our technology rules over us. Whenever a new way of doing something is discovered, it is subject to immediate criticism. This reflects how we are wired: everything new may be a threat to us. Nevertheless, as time goes on, we incline towards the new thing rather than rejecting it, so much so that we eventually accept it and define it as "normal".

This does not only apply to technology. The same pattern spans many topics: new ideas come up, some are accepted and some are rejected. For most of the past thousand years, the reception has been relatively positive; a notable example is the printing press. This changed around the time of the Industrial Revolution. Without dismissing it in spite of its many problems, such as labour conditions, the acceleration of production also led to advancements in science and technology whose creations brought consequences of their own. This became apparent during the First World War, in which these new technologies were used to kill somewhere between 15 and 22 million people, in what remains one of the deadliest conflicts in human history.

As time went on, even technologies expected to lead to greatness ended up making life a bit more complicated than promised.

Forty years ago, Reactor No. 4 of the Chernobyl Nuclear Power Plant exploded, igniting a nuclear catastrophe the likes of which we had never seen. The cause was a miscalculation of the plant's capabilities, combined with failures by staff under severe labour pressure. Approval of nuclear energy declined sharply afterwards, as the disaster illustrated how dangerous reactors can be in the hands of fallible humans. Even with the best intentions, we may well poison ourselves before we reap any benefit.

What did we all learn from this? Well, probably too little. As humans like to do, we created two camps. One opposes nuclear energy, arguing that its dangers to the environment could be worse than those of any other form of energy: reactors require significant amounts of water, care, regulation, and secure storage for radioactive waste. Supporters believe it is more sustainable than other common forms of energy generation, such as gas, coal, and oil, and that its full potential, such as reactors utilising thorium, has not yet been placed on the world stage of renewable energies. Nonetheless, we should always consider the risks of new technologies and take small but calculated steps. I may talk about this later, but unfortunately, we see far too little of this with new technologies, as hype and money drive safety standards down, which can and will hurt us.

Now, what do nuclear energy and artificial intelligence have in common? Why mention this at all? Surprisingly, there is a lot of crossover between these two technologies.

During a school lesson, we explored a topic I loathe: artificial intelligence. I must sadly admit I will need to invest more brainpower into this topic later down the road of my career; it seems as if nobody is interested in any new technology other than this one. As I speak (or rather, as I write), it is without a sprinkle of doubt that some "insanely mindblowing" development in artificial intelligence, and the race between corrupt companies to create "AGI" (whatever the hell that buzzword actually means), will creep up. I will not cover any of them (sorry, tech gooners!), but since I must for this part of school anyway, I will talk about its nature and mechanics.

I consider myself technologically conservative, although that label is a bad omen. In political theory, there is the idea that all sets of ideas can be organised on a spectrum between progressive ideas (left) and conservative ideas (right), with perhaps something in between (centre). There are also its extremes (far-left and far-right), as well as "I am mostly this, but I also take ideas from that" (centre-left and centre-right). In that sense, my technological ideology (mind you, only in a technological sense) sits somewhere at centre-right. Unfortunately, I personally prefer not to express this too loudly in such terms, so take that as you want. One important factor to consider is that I was born into a generation of modern technological advancements, which makes this political spectrum, historically built on where political parties sat, very much simplistic.

I use the terminal more than my web browser, I read man pages, I prefer locally installed applications over billions of websites that are simply a calculator, and I reject much of the mainstream's preferred slop technologies (bloatware, social media, "software for everything", "smart" technologies, fast-food coupon apps); not because I fundamentally believe in conservatism (humility has not been conserved!!!), but because I have realised the disadvantages and the general ugliness of using certain services. Think about it: if people were able to date without a dating app, why would I need one, never mind that dating apps are designed to keep you paying with the perceived hope of actually finding a partner? At first this sounds refutable, "times have changed", but while most people now find their significant other online, it is safe to assume this is not thanks to dating apps. Businessmen know this, so it becomes a question of "how can we make lonely people stay on our screens?"

The reason I have stretched this topic so far is my dislike of artificial intelligence. It has sharply increased since 2022, and since the technology reached mainstream normies, I have started to become a bit mental whenever somebody mentions artificial intelligence (to say the very least). Seriously, anything that touches NPCs' tongues almost always rots like King Louis XIV's mouth. It has become sickening to never hear about cool technologies in mainstream media, only about the most mediocre of them all. Artificial intelligence is complicated, and since the mainstream almost always reaches for dopamine-inducing content, virtually all details get stripped away. Outside of technology content (not magazines or tech news services), I have never heard even the most basic concepts of artificial intelligence come up, such as computation, machine learning, or even large language models.

I noticed this in our school lessons as well. When our teacher presented a video of "martial arts robots", no one, and I mean no one, not even the teacher, questioned whether any of it was actually real. Everyone was larping about how "cooked" we are. Nobody at any point asked themselves: "Could this also just be entirely staged?" At that point, I was not only shocked but also disconnected from the entire course. How can we talk in school about reason, independent thinking, learning skills, and the idea of self-development while at the same time being this taken in by a freaking video with so many potential entry points for lies? I know school is fundamentally flawed, but I do hold my school in higher regard than I used to. Nevertheless, it was one of those moments where I was reminded of how stupid all of humanity, including myself, can be.

For this reason, I struggle to take artificial intelligence seriously while also being afraid of it. It is not a topic I have sleepless nights over, but it is one that carries a negative connotation for me. With all the hype around, I cannot take seriously headlines such as "We have already achieved AGI" or "New benchmark reveals a 1000x increase in searching for the best potato juice recipe" or "New video generation allows for indistinguishable cat videos". At the same time, the sheer amount of trust in artificial intelligence greatly concerns me. It has become more normal to ask ChatGPT to do your homework than it is to leave the house on a full stomach. I also witness uses of artificial intelligence where no artificial intelligence would have made the outcome any better. For instance, when desperate to find information without knowing its proper location, I have used artificial intelligence, only to be disappointed when the solution given to me did not work. I find this realisation shocking, since hardly anyone shares a similar view. For this reason alone, I have had to rethink to what extent my personality, or even my existential crisis, results from my perspectives on certain matters. Maybe I am wrong about everything I say here, and maybe my hot takes have landed me in no man's land...

I am beating a dead horse. Obviously, I tried chat prompts, the primary use of artificial intelligence, but with little to no satisfactory results. I was told it is because I am not a "prompt engineer", but okay: apparently "AGI" cannot even comprehend the most basic question asked with a reasonable amount of detail, such as "write a shell script for sway-bar". What are we even talking about here? A superficial algorithm fetching search engine results, or an actual artificial intelligence? Wait...

Indeed, this is more or less how I perceive artificial intelligence, and where better to realise it than by using a regular search engine. Great! Layers upon layers of software! Welcome to 2026! Do you understand now why I believe the technology of old is the new gold? Every tool from past decades has only gotten better, while rewrites or completely new ways of doing the same thing are still riddled with errors. Even so, I have my doubts that artificial intelligence will follow this pattern of improvement. Meanwhile, as condescending as I am towards a piece of intangible technology, I was tasked with a different question: do we all take this matter seriously? Are we so caught between a hype train and a hate train that we do not realise the seriousness of the situation? Where society splits, conflict may follow. Given that we might even be creating a new type of person, are we aware of the dangers and the absurdity of it?

It is, without a doubt, not an easy question to answer. Let me express this more formally.

Artificial intelligence, shortened to AI, is a technological advancement in which computation approaches an intelligence akin to that of a human mind. Since the start of the 21st century, there has been a sharp increase both in the computational development of machine learning and in optimised software utilising the data extracted from its results. As a result, consumer products have arrived since the early 2020s that created a general consensus that artificial intelligence has reached a state where it could displace sectors of human labour. Nevertheless, as a result of said consumer products, such as OpenAI's Sora video generation tool, large numbers of people have started to use artificial intelligence in a sarcastic sense, incidentally undermining its dangers. At the same time, dependency on artificial intelligence technologies in general use cases has become the norm. This leads to the question of to what extent its accelerated advancement and daily use is a cause for concern or an exaggeration of news headlines.

First of all, awareness of the dangers of artificial intelligence has already reached popular culture, to the extent of fearmongering. There has been a significant amount of prose, articles, essays, and media content depicting artificial intelligence and its uncanny consequences for the future. Since the general consensus has a clear vision of the implications of artificial intelligence, it is safe to assume that current society is aware of the shortcomings stemming from its sharply accelerating advancements. The presence of artificial intelligence in popular culture has in turn led to a significant amount of fearmongering. Fearmongering increases awareness of the dangers of such developments, because the human mind weighs negative outcomes far more heavily than positive ones. In turn, as constant news headlines implying the threats of these technological advancements are reported, viewers come to understand the need to care about the problems of artificial intelligence. In that sense, artificial intelligence is taken seriously enough, at least by publications concerned with its future.

Secondly, public interest in the seriousness of artificial intelligence has led to developmental changes. When chat bots exported misinformation or were programmed to spread disinformation, public outrage resulted both in vulnerabilities in these chat bots being fixed and in authorities investigating and regulating safety measures for companies to follow. When Grok, a chat bot, reportedly became able to nonconsensually modify images of minors in a sexual manner, widespread outrage increased reporting to the point where prosecutors internationally opened investigations. This worst-case scenario nevertheless reveals how public interest directs the way artificial intelligence should be regulated. Since much of the development occurs in democratic countries, a general setback in public approval of artificial intelligence allows legislation to be enacted more easily. Causes for concern can be addressed by laws and regulations, thus proving that artificial intelligence is indeed taken seriously.

However, with the population being able to exert such influence, it is concerning how little is known about artificial intelligence. Many terms describing the structure of artificial intelligence still appear to be uncommon, whereas its development and regulation require broader knowledge of AI in order to specify changes. For instance, not many people know that a chat bot at its core is simply a large language model that has been taught, via machine learning, to determine which word is appropriate to add next to a reply to user input. As problems arise through attacks such as language manipulation and prompt injection, the authors of these large language models need to patch or modify their parameters to prevent wrongful output. Because a large proportion of people are unaware of these structural explanations, directing a change in what an artificial intelligence should not say, for instance, will be increasingly difficult: regulation requires a clear starting point and end when specifying limitations, otherwise loopholes in the law will let companies avoid the regulations. With that in mind, it is quite difficult to see how a regular person can take artificial intelligence seriously given their limited scope of knowledge. A person cannot take a problem seriously enough if they are unaware of it.
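To make the "predicting the next word" idea concrete: a chat bot's reply is produced by repeating one step, pick a likely next word, over and over. The following is a toy sketch only, with a hard-coded probability table standing in for the billions of learned parameters of a real large language model; every name in it is made up for illustration.

```python
import random

# Toy "language model": for each current word, a probability
# distribution over possible next words. A real large language model
# learns these probabilities from huge amounts of text instead of
# using a hand-written table like this one.
NEXT_WORD = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the":     {"cat": 0.5, "dog": 0.5},
    "a":       {"cat": 0.5, "dog": 0.5},
    "cat":     {"sleeps": 0.7, "eats": 0.3},
    "dog":     {"sleeps": 0.4, "eats": 0.6},
    "sleeps":  {"<end>": 1.0},
    "eats":    {"<end>": 1.0},
}

def generate(seed=0):
    """Repeatedly sample a likely next word until the model emits <end>."""
    rng = random.Random(seed)
    word, reply = "<start>", []
    while word != "<end>":
        choices = NEXT_WORD[word]
        # Weighted sampling: more probable words are chosen more often.
        word = rng.choices(list(choices), weights=choices.values())[0]
        if word != "<end>":
            reply.append(word)
    return " ".join(reply)

print(generate())
```

The point of the sketch is that nothing in the loop "understands" the sentence; it only follows the probabilities, which is also why manipulated input (prompt injection) can steer the output.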

What is worse is that because most of these products today are consumer products, and because companies ship them inside other regular consumer products, it is relatively common to simply accept them. As mentioned, artificial intelligence gets integrated via updates into regular products that previously had none (such as a messaging app), and many people either accept its invasive existence or try it out and keep using it. For instance, the Microsoft Windows 11 operating system enforces updates on its customers. Updates may include bug fixes, feature changes, updated hardware drivers, or additions of artificial intelligence. These additions to Windows 11 are tightly integrated into other core software, meaning they cannot be removed from a system, only partially disabled. Subsequently, new artificial intelligence features are added, directly affecting customers who have no way to refuse the update or remove the feature; and because it is becoming increasingly difficult to disable such features, they end up passively enabled for users, without much in the way of consent. This is notorious with a feature called "Recall", which periodically saves screenshots of user activity so that an artificial intelligence can let the user search for any past activity on the computer. If artificial intelligence becomes tightly integrated into regular software, a majority will likely simply accept its addition. In turn, people become more accustomed to its mere existence, decreasing cause for concern. With artificial intelligence already in use by so many regular users, this becomes apparent the moment one's life is inhibited by not using it.

This creates a dependency on artificial intelligence in everyday life. Many people already use artificial intelligence in a multifunctional way, for cooking, programming, calculation, and even therapy; once a person's life is dynamically linked with artificial intelligence, it only becomes harder to live without it. If these services were to shut down, and all current artificial intelligence features depend on a remote internet connection, it is entirely possible that people would struggle with basic chores and duties, or would depend on chat bots to solve their problems. An apparent sector is education. Much of school revolves around remembering facts or learning topics by heart. Artificial intelligence is, by design, good at replying with standard knowledge and facts, such as historical events or mathematical calculations. For many students this appears helpful, to the point of undermining concerns about false information, because the chat bots were previously good at producing the information school requires. Thus, the problems of artificial intelligence can appear less serious than they are in reality.

In short, be it positive or negative, artificial intelligence needs to be taken seriously. To what extent depends on an individual's expertise and awareness. It is, nevertheless, a societal problem to be solved. While measures are already being taken, and in spite of popular culture's depictions, the general lack of formal understanding, together with widespread adoption creating a dependency on artificial intelligence in regular life, shows that artificial intelligence in its current state is still not taken seriously enough, at least from a personal perspective. More often than not, it is even ignored, because including it has already become the default for daily consumer products. The potential harm of a technology this influential features nowhere in most people's equations.

On a personal note: prior to the explosion of artificial intelligence in 2022, machine learning was one topic I was curious to learn about in the future. That future is now unappealing.

I do not dislike new technologies. In fact, unlike some random conspiracy theorist using Slackware 14.2 and refusing to update a single piece of software unless absolutely necessary, I enjoy new technologies. I use iwd, not wpa_supplicant; I use elogind and dbus (although that is not uncommon); most of my applications are not very old. The problem is that, unlike the technologies only nerds hate, artificial intelligence is of a whole different caliber of absurdity. People are already losing hope, and yet do not even know what it actually means to have "artificial intelligence".

It is a state I call artelesis: a point where you look far into the future and, while at it, lose hope. In many instances of our lives, including mine, we seek to read the future. We always want the answers, never the process. This is usually the point where we lose a lot of reason. However, with my knowledge of this topic, I can stand neither the artelesis nor the hype around artificial intelligence, nor its ever-increasing integration into software that does not need this crap.

Therefore, I have committed myself to avoiding this technology like the plague, and to considering entirely deleting the products I use that are fully accelerating into this madness. I do not feel comfortable knowing that we are using this many resources for such slight improvements. It is almost as if we have forgotten about environmental goals and any sort of ethics concerning cobalt. Linking back to where I started this blog post: companies are trying to reopen nuclear power plants to balance out the high energy demands of artificial intelligence. Because of this same demand, we are also experiencing the death of consumer electronics through a global memory supply crisis, accelerating the prices of RAM and storage. Have we ever learned from our mistakes? All of these implications stem from similar issues in the past, and yet we are repeating unimaginable amounts of the exact same mistakes. I seriously cannot think straight anymore when artificial intelligence comes up, and I am shocked at how much we boycott, and yet nobody boycotts artificial intelligence.

So, what is the real conclusion here? Well, like many things in your life, it is up to you to decide. I believe we should boycott artificial intelligence. Boycotting its products, spelling its acronym in lowercase, calling on governments across the world to regulate it, and spreading awareness of it may seem like small gestures towards noble goals, which they are, but if enough people are willing to make them, we might see a shift in this mess.

Some people call it a "bubble" that will pop. If we accept this metaphor, how can you pop a bubble when nobody picks up the needle? Others refer to it as a black hole; but then again, why are we accelerating our spaceship, the Earth, into it?