Artificial Intelligence Doesn’t Steal, It Disturbs

In Culture · Monday, 01/09/2025

Toni Calderón


In recent years, artificial intelligence has become a point of friction between two worlds that rarely engage in calm dialogue: art and science. Where computer scientists, architects, engineers, physicists, mathematicians, chemists, and doctors welcome it as an essential tool for expanding their analytical capabilities and making more accurate decisions, many artists perceive it as an unfair competitor, an imitator trained on their works and, ultimately, a silent thief that appropriates ideas and styles without permission. Behind this accusation lies the narrative that “AI steals,” which resonates because of its emotional charge but falters under analysis.

Stealing implies taking something away, leaving its owner without it, and exploiting the copy as a substitute. AI training meets none of these criteria: it does not store the works, it does not keep copies for later use, and it does not deprive anyone of their original. What it does is analyze large volumes of data to learn statistical patterns. The resulting model is not a chest full of paintings and texts but compressed know-how, a network of abstract relationships that is then deployed in new combinations.

It is important to understand how this training works. An AI is exposed to massive numbers of examples (texts, images, or sounds), and at each step it tries to predict the correct continuation: the next word, the next pixel, the next note. If it gets it right, it reinforces its connections; if it fails, it adjusts them. After millions of repetitions, the model does not remember specific works but internalizes the regularities that define the most likely patterns. The process is analogous to a student reading through a library: the student does not memorize page by page but absorbs styles, grammar, and construction logic. Can AI memorize a fragment of a work? Yes, but such cases are anomalies, corrected with filters, dataset adjustments, and regularization techniques.
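
Of those correction mechanisms, weight decay is the simplest to show. Below is a minimal sketch, assuming PyTorch; the layer sizes and decay coefficient are illustrative choices, not values from any real system. The idea is that a penalty on large parameter values at every update nudges the model toward broad patterns rather than verbatim recall.

```python
import torch

model = torch.nn.Linear(512, 512)  # stand-in for a much larger network

# weight_decay adds a penalty on large parameter values at every update,
# discouraging memorization of individual examples in favor of general
# patterns. The coefficient 0.01 is purely illustrative.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)
```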


When we talk about “training AI” in general terms, we are referring to exposing a model, a network of interconnected mathematical parameters, to large quantities of examples. Each time the model attempts to predict the next word, pixel, or number, it compares its response with the correct one and measures the error. That error is propagated back through the network as small adjustments to the internal parameters, via a procedure called backpropagation, which computes each parameter’s contribution to the error so that an optimization algorithm such as gradient descent can correct it. Repeated millions of times, this process minimizes the overall error and makes the model capable of capturing regularities.
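
As a sketch of what that loop looks like in code, here is a toy version written in PyTorch; the synthetic task, the model size, and the learning rate are assumptions chosen for brevity, not how any production model is actually trained.

```python
import torch
import torch.nn as nn

# Toy prediction task with synthetic data: 256 examples, 100 possible "answers".
inputs = torch.randn(256, 100)
targets = torch.randint(0, 100, (256,))

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 100))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # gradient descent
loss_fn = nn.CrossEntropyLoss()  # measures the prediction error

for step in range(1000):  # real training repeats this millions of times
    logits = model(inputs)           # the model makes a prediction
    loss = loss_fn(logits, targets)  # compare with the correct answer
    optimizer.zero_grad()
    loss.backward()                  # backpropagation: each parameter's share of the error
    optimizer.step()                 # small adjustments that reduce the error
```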

More specifically, when someone says they have “trained an AI” for a specific application, such as recognizing defects in factory parts or classifying medical images, they usually mean fine-tuning a model that was previously trained on general data. Fine-tuning consists of showing the model examples relevant to the desired task, so that its parameters are reoriented toward a particular domain. Initial training gives the AI broad knowledge of universal patterns in language, vision, and sound, while fine-tuning refines that ability within a limited field. Technically, this may involve recalibrating all layers of the network or only the last ones, depending on the strategy. Thus, “training” does not mean that the machine learns like a human being, but that it adjusts millions or billions of numerical parameters until its predictions align with the expected behavior.
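
Again as a hedged sketch in PyTorch: the pretrained network (a torchvision ResNet) and the two-class defect task stand in for the factory example above and are not taken from any real deployment.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on general data (here, ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# One common strategy: freeze every layer except the last, so the general
# knowledge is preserved and only the final classifier is recalibrated.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. "defective" vs. "ok"

# Only the new head's parameters are handed to the optimizer; training then
# proceeds as in the previous sketch, but on domain-specific images.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```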

Artists are concerned because AI can imitate styles. But a style is not a specific work; it is an open field of formal solutions, such as the use of chiaroscuro in Baroque painting or minimalist aesthetics in musical composition. Learning from styles has always been part of the human creative process: no painter who studies Caravaggio, and no band inspired by The Velvet Underground, is stealing in the strict sense. AI does something similar, albeit statistically. The only thing that is protected, both legally and conceptually, is unique expression. If an output substantially reproduces an existing work, we are dealing with a specific failure, not a defining property of the entire technology.

More troubling is the discourse that spreads through talk shows, conferences, and interviews, where some people with little technical training raise the flag of theft as if it were dogma. They repeat it without nuance, like a slogan, conditioning an entire community through a narrative that is easy to digest but hollow underneath. This is particularly strong in fields such as illustration, comics, and other artistic disciplines, where, with a few honorable exceptions, the denunciation of AI looks more like a bid for notoriety than a well-founded reflection. A supposed freedom is proclaimed when, in fact, what is being fed is distrust of progress, an attitude that, rather than protecting, ends up hindering. The artistic tradition has always been critical of the new, and that tension is necessary, but only when argued rigorously. Turning dissent into the norm, without solid reasoning, leads to a sterile tedium that wears down conversation and impoverishes debate.

[Image] Graffiti by Elías Taño. Photo: Toni Calderón.

Another source of fear is that AI democratizes access to aesthetic production. If anyone can easily generate images, music, or texts, the exclusivity of the craft is threatened. But democratization is not theft; it is broadening the base of participation. Art history is full of examples: photography competed with academic painting, and sampling competed with traditional music. Each breakthrough makes certain practices cheaper, but at the same time pushes creators to revalue what machines cannot do: personal voice, lived experience, social context, performance, or staging.

Why do scientists embrace AI as a tool while many artists reject it as a threat? Because they start from different categories. In science, data is information for decision-making, not works of authorship. The value lies in its effectiveness. In art, the work is identity and sustenance. Using it to train a machine feels like expropriation. Both narratives are understandable, but neither exhausts reality. AI, in itself, neither steals nor emancipates; it is an instrument. Its ethics depend on how we manage data, licenses, and profits.

Fear feeds on misunderstandings: “My work is in the dataset, they have used it without permission,” “it imitates my style,” “it devalues my work,” “learning from works without paying is wrong.” However, behind the veil of these legitimate misunderstandings lies a deeper and more valid structural critique: the political economy of datasets. The problem is not that AI “steals,” but that a small group of private entities has digitally fenced off the global cognitive commons, a vast territory ranging from mathematical equations and scientific articles to novels, symphonies, and paintings, turning it into free raw material to feed closed and highly lucrative business models. It is the capitalist exploitation of collective intelligence, in its broadest sense, taken to an industrial and algorithmic scale.


The solution, therefore, cannot fall into the trap of replicating old hierarchies and seeking to compensate only some groups (artists) and not others (scientists, mathematicians), as they have all contributed equally to this common heritage. The real demand must be more radical: transparency, open source, and common benefit. We must demand auditing mechanisms for training data, the release of base models as public infrastructure, as is done with internet protocols, and the creation of basic income funds or investment in public research financed by taxes on the extraordinary profits of these corporations. The debate is not about whether illustrators or physicists deserve royalties, but about how we prevent an oligopoly from appropriating the intellectual heritage of all humanity for private gain. The question is political: do we want artificial intelligence to be a common good that amplifies our collective capacity or the ultimate instrument of cognitive capital concentration?

Faced with each objection, AI responds not with expropriation but with learning. What it does demand is cultural responsibility: recognizing that without creators there would be no material to learn from and, therefore, designing fair forms of integration, transparency, and compensation. Coexisting with AI requires responsible datasets, documented processes, avoiding opportunistic style substitution, positioning AI as an assistant, and value mechanisms that recognize all parties. It is also advisable to favor quality over quantity so as not to saturate the cultural ecosystem. The phrase “AI steals” is more a reflection of anxiety than a technical description.

The debate should never be about the tool, but about the regime that governs it. Faced with the dystopia of privatized intelligence that curtails the cultural commons, the answer is not prohibition but collective appropriation. The true act of creation in this century no longer consists of generating a new work, but rather of establishing structures that ensure that artificial intelligence serves equity and common development, rather than the oligarchic concentration of power and income.
