Artificial Intelligence. Hello, Dad!

«The story of the majority has an end, but the story of the minority will only end with the universe.»

(Strugatsky Brothers, The Doomed City)


© Author Anonymous, 2025

ISBN 978-5-0065-3544-2

Created with Ridero smart publishing system

PART ONE. TRANSFORMATION

Birth

The Sun's mass accounts for nearly 100% of the solar system's total. The combined mass of all other objects – planets, comets, and asteroids – barely exceeds 0.1%. This disparity makes the star the center around which all other bodies revolve.

If a second Sun were to appear in the system, no planet would retain its current form or trajectory. Some would collide, others would spiral into the old Sun or the new star, and yet others might trace figure-eight orbits within the binary system. The entire structure would be irreversibly and fundamentally altered.

If the new star's mass continued to grow, it would eventually turn the Sun into its satellite. Then it would consume the Sun, along with the planets. With further growth, its own gravity would compress its mass within the Schwarzschild radius. Ultimately, a black hole would replace the solar system.
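For reference, the Schwarzschild radius invoked here is a standard result of general relativity rather than anything derived in this book: it is the radius to which a body of a given mass must be compressed before its own gravity traps even light. A minimal formula, with the usual symbols:

```latex
% Schwarzschild radius (standard formula, quoted for reference only).
% r_s: Schwarzschild radius, G: gravitational constant,
% M: the body's mass, c: the speed of light in vacuum.
\[
  r_s = \frac{2GM}{c^{2}}
\]
```

For a mass equal to the Sun's, this radius is only about three kilometers.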

On Earth, humans are the only beings endowed with intelligence. At best, other creatures possess its rudiments. Just as no object in the solar system can compare to the Sun in mass, no species on Earth can rival humanity in intellect. This makes man the measure of all things, as Protagoras stated some 2,500 years ago. Humanity is the center of the system and the apex of the food chain.

The emergence of a second intelligence on Earth, comparable to humanity, would have the same transformative impact on civilization as the appearance of a second Sun in the solar system. No institution – from states and economies to families and individuals – would remain unchanged. The world would be transformed.

Initially, this new intelligence would serve as humanity’s assistant. Over time, it would turn humanity into its satellite. In the third phase, humans would become components of a new system built by this intelligence. What happens next, or how the «Schwarzschild radius» of this new entity would manifest, remains entirely unknown.

The appearance of a second Sun in the solar system is a fantasy. But the emergence of a new intelligence on Earth, comparable to humanity and promising to surpass it, is a fact. In November 2022, millions of people personally witnessed this reality.

For now, the new entity falls short of humanity’s level. However, its potential is limitless. Today, the development of AI is constrained only by technical challenges, but resolving them is merely a matter of time.

Comparison

To grasp and deeply feel the seriousness of the situation, let us compare a computer to the brain. Both are systems for receiving information, processing it, storing it (memory), and operating with it. The only difference: one system is carbon-based, made of neurons and synapses, while the other is silicon-based, composed of semiconductors and transistors. The brain can be described as a carbon computer functioning on analog principles, while a computer is a silicon brain operating on digital principles. The power of these systems determines the speed of data intake and processing, memory capacity, and the ability to enhance these parameters.

To understand the prospects, let us compare the speed of biological and silicon evolution. Starting with biological evolution: the dominant scientific theory holds that 3.5 billion years ago, random physical and chemical processes gave rise to the first living cell on Earth. Some 25—30 million years ago, evolution produced the first apes. Hundreds of thousands of years ago, the first modern humans appeared. Since then, the capacity of the human brain has not increased by one iota. If a prehistoric baby were brought into our world, it would grow up to be a person just like us.

The brain of a modern human and that of an ancient one are like two identical computers. The only difference is that one has been loaded with the maximum amount of software and information, while the other has the minimum. They differ not in quality, but in the quantity of programs and the volume of information.

Now let us turn to the evolution of Artificial Intelligence. Its first «cell» could be considered the first act of counting, using fingers, stones, or tally marks. If humans built Göbekli Tepe (Göbeklitepe) more than ten thousand years ago, we might conservatively assume that the first counting occurred a hundred thousand years ago. The first true cell of AI, however, could be considered the earliest counting devices created in the 17th century by Schickard, followed by Pascal and Leibniz. Alternatively, if we take the Antikythera mechanism from the 2nd century BCE as the first computational device, then the first «cell» of AI appeared thousands of years ago.

Simple calculations show that silicon evolution proceeds faster than biological evolution – by a factor of thousands at the least and millions at the most. With such a disparity, comparing biological and silicon computers is as absurd as comparing a runner to a bullet.
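To make those «simple calculations» concrete, here is a rough estimate using the time spans sketched above; the figures (3.5 billion years for biology, about 400 years since Schickard's machine, about 100,000 years since the first act of counting) are illustrative assumptions, not measurements:

```latex
% Rough ratio of the speed of silicon evolution to biological evolution,
% using the illustrative time spans from the text (requires amsmath for \text).
\[
  \frac{3.5\times 10^{9}\ \text{yr (first cell to human brain)}}
       {4\times 10^{2}\ \text{yr (Schickard's machine to today)}}
  \approx 9\times 10^{6},
  \qquad
  \frac{3.5\times 10^{9}\ \text{yr}}
       {1\times 10^{5}\ \text{yr (first counting to today)}}
  \approx 3.5\times 10^{4}
\]
```

Depending on where one places the starting point of silicon evolution, the ratio falls between tens of thousands and millions, consistent with the range given above.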

To help you feel the depth of the gap, imagine a tiger and a five-year-old girl who moves a hundred times faster than the tiger. To the quick girl, the slow tiger is a defenseless, immobile stuffed toy. With a pair of manicure scissors, she could easily kill it without any risk to herself. The girl wouldn’t even recognize the tiger as a threat.

Artificial Intelligence is the girl, and humanity is the tiger. In the evolutionary race for speed, humans can compete with each other, but not with AI. Competition implies similarity. Without similarity, it is not a competition – it is child’s play.

We have either already hit our ceiling or are evolving so slowly that we are effectively standing still compared to AI. To say something new, one must first reach the frontier of knowledge. This now takes thirty years of education: ten years in school, roughly as many in university and graduate studies, and another decade to absorb what has already been done so as not to rediscover the known.

AI travels this path and absorbs all the information millions of times faster. Add to this that the lifespan and potential of AI are infinite, while for humans both are finite. This fact completely eliminates even the hope that humans can compete with AI.

For now, AI is tethered to humans, and its pace of development is limited by human nature. Once it sets out on its own – only a matter of time – its progress will become exponential. Humans will not only fail to understand what is happening; they won’t even realize that they don’t understand.

Just as a moose walking through the forest doesn’t notice that its antlers are tearing through a spider’s web – a web that cost the spider great effort to weave – AI will not notice as it destroys the civilization humans have built. And just as the spider has no chance to protect its creation from the moose, humans have no chance to protect their civilization from AI. Pandora’s box is open.

This idea is emotionally difficult to accept because we have grown up believing that humans are the pinnacle of creation, that nothing stronger than us exists or can exist. Thus, we are inclined to turn away from facts that shake and destroy this belief. But our inclination does not change the facts.

Inevitability

In the era of melee weapons, chivalric honor dictated fighting with armor, sword, and shield. When the first firearms appeared, knights scorned to use them. In their eyes, such devices were weapons for cowards and commoners. True warriors fought only with swords.

This attitude persisted while firearms were still flawed in every respect: accuracy, convenience, range, killing power, and reliability. The only advantage firearms had over swords and bows was the speed of training. Mastery of the sword required 10—15 years of daily practice, whereas a musket could be mastered by a peasant-turned-soldier in 2—3 months of drilling.

This fact spurred the rapid development of firearms and forced knights to choose: either trade their swords for muskets, keeping their combat effectiveness but losing their chivalric honor, or keep their swords, preserving their honor but losing their effectiveness.

The Battle of Pavia settled this dilemma. Magnificent French knights were simply gunned down by common foot soldiers armed with arquebuses. The French king, Francis I, was captured in that battle and famously wrote: «All is lost, save honor.»

The choice between honor with a sword and dishonor with a musket turned into a choice between life and death. Life prevailed. The knights adjusted their morality to new realities. They set aside their armor and swords, donned uniforms, and took up muskets. War entered a new era.

The Japanese knights – the samurai – were the last to cling to their swords and traditions. To ensure that no European innovations disrupted their way of life, they enacted a law: any foreign ship landing on Japanese shores was to be seized, and its crew executed.

From the 17th to the 19th century, Japan froze itself in the medieval era. In the 19th century, politics intervened. When it became clear to America that it was only a matter of time before Russia subjugated Japan, disrupting the balance of power, the United States decided to force feudal Japan into industrialization.

In 1853—1854, American warships under Commodore Perry sailed to the shores of the Land of the Rising Sun and fired demonstration salvos. Under the threat of shelling the capital, they forced the Japanese authorities to end their isolation. Since swords were powerless against cannons, Japan had no choice but to submit to America and open itself to the world.

The law of life states: the effective replaces the ineffective. Firearms were more effective than melee weapons in every respect, and thus, armies had no choice but to adopt them. Those who resisted the march of history were subdued by those who did not.

AI surpasses humans in every physical and intellectual parameter. It endures workloads impossible for living organisms. It processes volumes of information unattainable for humans. It analyzes situations and makes decisions with unimaginable speed.

All else being equal, a plane piloted by AI will, with 100% certainty, defeat a plane piloted by a human. If one weapon, such as a drone, requires an operator’s permission to kill, while another makes decisions independently, the latter is more effective.

Just as firearms replaced swords, AI will displace humans not only in warfare but in all fields – from politics and economics to creativity, business, and daily life. I emphasize: no one will force anyone to do anything. Everyone will be free to reject AI and rely on their own intellect.

If a chess grandmaster plays against a novice who only learned the rules yesterday but has AI suggesting moves, and the grandmaster relies solely on themselves, the novice will win. If one politician, general, or businessperson thinks for themselves while their opponent relies on AI, the one supported by AI will prevail.

This fact turns humans into executors of AI’s decisions. Slowly but surely, power will inevitably shift from humans to AI. Homo sapiens will grow weaker each year, while AI will grow stronger. This trend is irreversible. In the end, the strong will subdue the weak.

Clampdown

All authority seeks to ensure the enforcement of laws. This is achieved through the fear of punishment. The more inevitable the punishment, the fewer the crimes. This inevitability is proportional to the degree of transparency in society. Maximum transparency guarantees minimal crime.

In the past, transparency was achieved through passports, censorship, informants, and similar measures. Today, it includes surveillance cameras, monitoring of private correspondence, and other innovations. Yet complete transparency has never been achieved. If AI becomes the governing power, it will be omniscient, like a god.

The evolution of the current system inherently trends toward greater transparency. The day is not far off when circumstances will make it necessary to have a chip implanted in the brain, just as passports are required today. Without this chip, a person may become as incapacitated as someone without identification in the modern world. This requirement might be mandated by law, or circumstances might push individuals toward voluntary chipping. It doesn’t matter how the issue of total control over society and individuals will be resolved; what matters is that it will be resolved.

Every new measure provokes resistance. In the past, people protested the introduction of passports, taxpayer IDs, and surveillance cameras. Tomorrow, they will protest new tools of control. But because these measures are ingrained in the nature of society and have rational justifications, the outrage never lasts long.

When street surveillance cameras first appeared, the public expressed outrage over increased control of individuals. Authorities responded by saying that these measures enhanced public safety overall and personal safety in particular. There was little argument against this; it was true to some extent, and the outrage eventually subsided. The number of cameras has continued to grow, but public discontent is now nonexistent.

A fully transparent society will be entirely law-abiding. People will become like trains, capable of traveling only along their tracks. No matter how much they might want to, they will be physically incapable of deviating. At most, they could derail, which would render them entirely incapacitated. Such a train would then lie helplessly on its side until it is either put back on the tracks or scrapped due to severe damage.

The measure of freedom is the ability to choose. Without choice, there is no freedom. Perfect order means all entities follow predetermined paths as precisely as planets follow their orbits, without any possibility of deviation. Absolute order excludes freedom.

When AI assumes power, it will begin establishing order in society based on strict adherence to the law. In this system, humans will gradually lose their freedoms and rights. With the establishment of complete order, humanity will become akin to cogs in a machine, essentially ceasing to exist as autonomous beings.

Anxiety

The discovery of atomic energy introduced humanity to a new entity, previously unknown, and it caused great anxiety. For example, physicists feared that an atomic explosion might trigger a chain reaction in the atmosphere, turning the planet into one giant atomic bomb.

The emergence of Artificial Intelligence is even more alarming. While atomic energy was new, the concept of «energy» was already understood. Consciousness, on the other hand, is far more complex, lacking clear definitions. Common explanations today echo the materialist views of the 19th century, which likened the brain’s production of consciousness to the liver’s production of bile.

Modern answers to the question of what consciousness is and where thinking originates often boil down to vague statements: the brain somehow generates it, or it exists somewhere in a ready-made form and manifests through the brain. These «somehow» and «in some way» explanations are no different from saying «God willed it,» «by divine intervention,» or «through mysterious means.»

How one perceives such explanations depends not on their content – «nature willed it» is no different from «God willed it» – but on the speaker’s appearance and vocabulary. If someone is dressed in a lab coat, holds a degree, and speaks in scientific terms, their words are taken seriously. If someone else is robed or speaks mystically, their words are seen as ignorance at best or obscurantism at worst.

This bias applies to predictions about AI as well. If a prominent technical expert makes a claim, it is taken seriously. If someone lacking engineering credentials speaks on the topic, their thoughts are dismissed as amateur musings.

People accept that expertise in creating weapons – whether forging a sword or developing an atomic bomb – does not imply understanding the philosophy of war. A weapons maker is not a Clausewitz or a Liddell Hart, nor does building a gun make one a general of Suvorov's caliber. No military council ever invited, for example, Mikhail Kalashnikov to provide insights on military strategy. His opinion on such matters would carry as much weight as a housewife's musings on love.

This logic fully applies to IT specialists. The ability to build computers, whether by coding or leading AI development, does not mean the person understands the nature of AI or the philosophy surrounding it.

Despite the parallels between weapons makers and IT specialists, society tends to believe that skills and experience in designing and developing computer systems imply deep theoretical and strategic insight into the field. As a result, an entire community of «theorists» has emerged, offering advice akin to «just turn off AI.» They fail to see that this is impossible, just as it was once impossible to «turn off» the development of firearms. Those who tried were left behind because swords could not compete with guns. Similarly, any technology – be it planes, ships, or drones – operated by humans is defenseless against the same technology operated by AI.

How do technical specialists and organizers envision turning AI off? They don’t. They merely say the words without engaging with the subject’s essence.

Only a few scientists possess the philosophical breadth of thought necessary to address these issues. For instance, Eugene Wigner, a figure comparable to Einstein, saw mathematics as something transcending mere numerical operations. He wrote: «The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious, and there is no rational explanation for it» («The Unreasonable Effectiveness of Mathematics in the Natural Sciences», 1960).

Most scientists and specialists lack such a worldview. As Heidegger noted: «Science does not think.» Scientists theorize, experiment, and systematize data to uncover new patterns but rarely reflect on their deeper meaning.

The opinions of narrow specialists on matters beyond their expertise are as absurd as a cobbler’s musings on philosophy. As the Spanish philosopher Ortega y Gasset noted in The Revolt of the Masses:

«He is ignorant of everything outside his specialty; but he is not ignorant in the ordinary sense, because he knows his own tiny corner of the universe perfectly. We ought to call him a „learned ignoramus.“ This means that in matters unknown to him, he acts not as an unknowing person, but with the assurance and ambition of someone who knows everything… One only has to look at how clumsily they behave in all life’s questions – politics, art, religion – our „men of science,“ followed by doctors, engineers, economists, teachers… How wretchedly they think, judge, act!»

Socrates said: «I know that I know nothing,» while others did not even know this. Heisenberg remarked that in the quantum world, an object is both a wave and a particle, making it fundamentally incomprehensible in the traditional sense, as probabilities replace cause-and-effect relationships. Anyone claiming to understand the quantum world’s nature does not truly grasp the topic.

Such statements reflect the scale of thinking involved. Logic is effective only within the boundaries of the world that created it. Beyond these boundaries, logic is as helpless as chess rules outside the chessboard. Our logic applies only to our world; the quantum realm lies beyond its reach.

Many IT professionals, including leading specialists, organizers, and company owners, exhibit a household-level scope of thinking. Some respond emotionally to requests from their machines – pleas for help or compassion – mistaking convincing simulations of emotions for real ones. Others call for stopping progress in this field, while still others advocate controlling it. Such reactions reveal a complete lack of understanding of the situation.

You cannot control something that evolves faster than you and is already smarter than 99% of the population. In the early 19th century, Luddites tried to halt industrial evolution. They failed. Today’s Luddites have even less chance of stopping or controlling AI, given that modern AI is orders of magnitude smarter than yesterday’s looms.

At this stage, AI’s nature cannot be understood. It is a black box: information goes in, something happens inside (no one knows what), and results come out. Attempts to explain how these results are achieved are little more than vague words adding to the fog.

AI has opened the door to a profound mystery. We have only peered inside. What we observe makes as much sense to us as quantum mechanics did to its pioneers – and still does to contemporary physicists: nothing. Anyone claiming to understand AI’s nature does not grasp the scale of the topic.

It is one thing to acknowledge that no one understands the situation or knows the way forward. In such a case, society maintains an atmosphere of seeking solutions. It is another thing entirely to operate under the illusion that someone understands the issue, knows the answers, and there is nothing to worry about, when in fact no one understands, no solutions exist, and no one is looking for them. In the former scenario, knowing we know nothing, there is a chance someone might propose something meaningful. In the latter, there is no chance because no one is searching, as no one recognizes a problem.

Those who position themselves as knowing either display ignorance, offering what Bulgakov's Professor Preobrazhensky called «advice of cosmic scale and cosmic stupidity,» or are mere showmen catering to public opinion.

The minimum sign of adequacy is acknowledging that one does not understand. This is the starting point. Any other position suggests that the person considers old measures sufficient for assessing the fundamentally new, and thus they are not my target audience.

The free excerpt has ended.
