
AI notes 2024: Prognosis

I continue my notes on AI at the end of 2024.

In the previous posts we discussed three theses:

  • By analyzing the decisions of major AI developers, such as OpenAI or Google, we can make fairly accurate assumptions about the state of the AI industry.
  • All current progress is based on a single base technology — generative knowledge bases, which are large probabilistic models.
  • The development of neural networks, a.k.a. generative knowledge bases, is reaching a plateau; their further advancement is likely to be incremental/evolutionary rather than explosive/revolutionary.

Based on these theses, we can finally talk about the most hyped, most exciting topic: how long will the current progress continue? Will we reach the singularity in 2025, or will everything remain the same? When will our god of metal and silicon finally appear?

In 2023, I already published a forecast on artificial intelligence [ru]. It is still valid — take a look. In it, I spent more time describing what to expect; this text is more about what not to expect. Together, the two posts outline two boundaries of the possible, and the truth should lie somewhere in between.

There will be no technological singularity

Once again, the singularity will not come to us.

A strong AI (AGI) will not appear in 2025, solve the problem of global warming in 2026, cure cancer in 2027, introduce universal basic income in 2028, establish equality and hand out free steaks made from artificial meat in 2029, or launch tourist flights to Mars.

We should remember that the singularity is not a specific event or phenomenon, but an abstraction to denote a state of the world in which our current models of reality do not work.

In simple terms: nobody knows what will happen under certain boundary conditions, so our predictions push the numbers to infinity. In math or physics, this happens literally; in real life, it's more figurative — we assume everything will either turn out as perfectly as possible or as disastrously as possible.

In more complex terms:

  • We think strictly within the boundaries of the models of reality in our heads.
  • All models have a limited range of application and limited accuracy [ru].
  • This means there are always areas where we make bad, inaccurate predictions, or can't make them at all.
  • Often these are the areas we end up in when the model's input parameters change abruptly or too rapidly, because our models are usually not designed for such cases: the situations are too rare, and preparing and tuning for them is harder.
  • The expectation of a technological singularity is precisely such a case. One or more technologies begin to improve so rapidly that the consequences of those improvements exceed the capabilities of our models faster than we can adapt them. As a result, it becomes more practical to wait and see what happens than to rely on inaccurate predictions.

In such conditions, some people start to expect paradise on Earth, reasoning that since we don’t know exactly what new technologies will solve, they’ll solve everything. This mistake is understandable, but unfortunately, many exploit it to boost their personal wealth or popularity.

However, rapid changes always come to an end. Humanity gathers new data, revises its models, and starts making accurate predictions again. Everyone calms down—until the next leap or sudden change, which, this time, is sure to lead to the technological singularity.

Neural networks won't change conceptually over the next 10 years

By conceptual changes, I mean obtaining new qualitative properties in addition to the existing ones. For details, I invite you to read the post on generative knowledge bases.

By non-conceptual changes, I mean quantitative changes, like increasing speed, quality, or reducing costs.

I limit my forecast to the next 10 years because there is still theoretical room for a qualitative change in neural networks. For instance, current artificial networks separate training from their operational mode, whereas biological networks combine these modes at least partially. Adding such a capability to artificial networks could once again disrupt the market.

I believe such changes are unlikely (in the near future) for the following reasons:

  • The current breakthrough in neural networks has expanded the space of potentially profitable products for businesses by orders of magnitude.
  • Business is the main driver of current progress.
  • Exploring the space of potentially profitable products is much more beneficial (in terms of the profit-to-risk ratio) than looking for another similar space. While you're looking, your competitors are dividing the current market.
  • There is little hope for scientists either, as modern science is broken, and most researchers are driven by hype and money.

Speaking figuratively, artificial neural networks are currently at the stage of the PC market in the late 80s: the internet already exists, all the fundamental PC concepts are in place, and the basic production chains are either established or in the process of being established. Yes, many things will change over the next 30 years, many processes will be optimized, and the development of electronics will trigger numerous smaller revolutions. However, conceptually, the impact of PCs on our world will be more about quantitative scaling than qualitative transformation. To exaggerate, text editors already existed in the late 80s, and conceptually, they haven’t changed since — only becoming more polished, user-friendly, and powerful.

We won't build strong AI based on just one or a few neural networks

Let's not try to strictly define what strong AI is — let's leave that to philosophers, mathematicians, and lawyers. After all, not long ago, Microsoft and OpenAI defined strong AI in terms of the amount of money it generates.

We are simple folks and intuitively understand strong AI as something roughly similar to a human. Weak AI, on the other hand, might resemble a trained dog or cat, or a model that, at first glance, appears to be strong AI but doesn’t function like it—a sort of strong AI phantom.

Let me add a few words about the phantom of strong AI.

With infinite resources, we could create something that behaves like a human to an unprepared observer but is not human at all. For instance, we could gather a crowd of people to list "human" reactions to all possible events, record them in a database, and select reactions based on a lookup table. Such a system would behave (from an outside observer's point of view) like strong AI until it encounters an original situation or needs to learn (adapt to) something new. Or we could train a huge statistical model to predict text token-by-token, simulating a dialogue with a human, while remaining a static generative knowledge base.
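
To make the lookup-table phantom concrete, here is a deliberately naive sketch in Python. All names and canned data are made up for illustration: the system looks responsive as long as the input matches something already recorded, and the illusion ends the moment it meets an original situation.

```python
# A deliberately naive "phantom AI": a fixed lookup table of canned human reactions.
# All entries are made up for illustration; a real system would need astronomically
# many of them, but the principle would stay the same.

CANNED_REACTIONS = {
    "hello": "Hi! Nice to meet you.",
    "how are you?": "I'm fine, thanks. How about you?",
    "tell me a joke": "Why did the programmer quit? They didn't get arrays.",
}

def phantom_reply(event: str) -> str:
    """Select a pre-recorded reaction; there is no understanding or learning here."""
    key = event.strip().lower()
    if key in CANNED_REACTIONS:
        return CANNED_REACTIONS[key]
    # The phantom breaks on any original situation: it cannot adapt,
    # it can only fall back to a generic non-answer.
    return "..."

print(phantom_reply("Hello"))            # looks "human" enough
print(phantom_reply("My cat ran away"))  # "...": the illusion ends here
```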

Obviously, while such a system might look like strong AI, it won't be strong AI by any means. This is why I'm extremely skeptical and disappointed by claims — usually made by former employees of Google, OpenAI, and other companies — that consciousness has been discovered in modern models.

The question of creating strong AI is a complex one and requires a separate post to delve into. Unfortunately, I don't have time to write it, although I would like to. So I'll limit myself to a few theses supporting the idea that creating strong AI through simple means is highly unlikely.

The thesis from the perspective of technology.

If we view neural networks as generative knowledge bases, it becomes clear that no database can give us strong AI, as it implements only part of the necessary functionality (stores information). Similarly, relational databases and semantic networks did not lead us to a strong AI based on expert systems in the 80s and 90s.

The thesis from the perspective of the target system we have at our disposal: the human brain.

Our brain clearly doesn’t function as a single universal network or just a few generalized networks/components. Opinions differ on how many components can be identified in the brain, but there are undoubtedly many: several dozen distinct structures at the very least, and an enormous number if we count things like cortical columns as separate modules. These modules are not only structured differently but are also organized into a complex architecture requiring a sophisticated communication infrastructure (in technical terms).

Our entire progress toward strong AI over the past 50 years has amounted to creating a few modules that simulate fragments of it or abstract functions. However, no one yet knows how to assemble them or what to use to glue them together. Recently, we have gained one more building block, but constructing a structure from such blocks will require much more research and development. And this structure will be far more complex than just a pile of uniform cubes.

Neural networks on their own won’t take jobs away

I wrote a bit more on this topic in a separate essay: AI will not(?) replace us all [ru].

Since we don't expect strong AI, neural networks remain a (very powerful) tool for automating intellectual work. Perhaps the most powerful tool after writing, but still a tool. Such a tool still needs a guiding hand, even if it guides it very abstractly.

In reality, as far as I remember, no automation revolution has ever reduced the absolute number of jobs, only redistributed them. Business, in principle, is not interested in reducing production, only in increasing it: if something can be automated, optimized, or made cheaper, that's a reason to scale up, and scaling up requires (retrained) workers.

The new tool will shift the focus of professions from more routine intellectual tasks to less routine ones. This will require people to:

  • acquire a broader and deeper education, as increased abstraction means a broader and deeper area of responsibility;
  • retrain, as the tool is new and anything new requires learning;
  • cultivate basic curiosity to recognize when it’s time to start learning.

People who don't want to learn will run into problems finding work. But such people have had problems in this area for the past 100 years; it's not a consequence of one more tool being introduced.

There’s a view that progress in AI will enable people to perform work for which they previously lacked the necessary competencies. I don't share this opinion. Such situations may arise (and will be observed) as temporary phenomena or deviations from the norm, since a more intelligent tool (like a neural network) makes its less capable counterpart (an incompetent user) unnecessary.

For example, when the first cars appeared, the laws of some countries required a person to walk or run in front of the vehicle to warn others of approaching transport. Of course, this practice faded away on its own. The same will happen with such "AI counterparts."

In reality, the real difficulties will arise for several categories of people.

First of all, for people with intellectual disabilities:

  • There may be fewer trivial jobs available.
  • Work hierarchies may be restructured, and a person might find themselves subordinate to a "robot" (which, in turn, is subordinate to another person), a situation that could feel unusual, confusing, and unpleasant.

Helping such people find work should become one of the state’s priorities. However, I don’t believe that the advent of AI will shift the bar for "intellectual normality", so I wouldn’t consider this a case of mass job loss.

Secondly, for low-income individuals whose demanding jobs leave them cornered with no time or opportunity to learn (e.g., warehouse workers at certain corporations, many people in developing countries). Without the chance to retrain, they risk losing jobs without being able to quickly find new ones. Helping such people should have long been a priority for any healthy society. This issue has existed for a long time, and the adoption of AI will exacerbate it, though not dramatically. Addressing this problem should be part of the state’s long-term strategy rather than limited to specific measures aimed at easing AI integration.

Robots will not rapidly and massively replace manual labor

By robots, I mean physical robots, both humanoid and non-humanoid.

The issue with robots is that even with perfect software (let's say it works 100% of the time), robots still need hardware (bodies, joints, bearings, hydraulics, etc.) and electricity.

For the rapid and widespread adoption of robots, the production of complex, reliable hardware would need to match or even surpass the scale of automobile manufacturing. Scaling up such production requires not only time but also a redistribution of resources and adaptation of production chains across the globe. To put it bluntly, you can’t simply start extracting and processing 5-15% more raw materials without impacting the rest of the industry.

It's even more complicated with electricity and batteries.

Firstly, current models of robots consume significantly more energy than humans. There has been no significant progress in reducing energy consumption, so replacing cheap physical labor with robots may not be profitable in some cases.

Secondly, there have long been concerns about the sufficiency of resources for producing batteries for cars and wearable devices. There’s no guarantee that the same rare earth elements will be enough to support the production of a large number of autonomous robots.

Thirdly, the planet's energy production infrastructure has been under strain for the past 10-15 years, and there’s already not enough electricity to meet all demands. Data centers and crypto farms are built near large power plants with cheap energy for a reason — they are not economically viable elsewhere. Mass robotization could easily double the energy demands of the residential sector, which would be impossible to meet without a breakthrough in energy production — something that hasn’t happened yet.

Therefore, while there will be a gradual move towards robotization, it will be more likely to occur in complex manufacturing environments than in the service sector, households, or low-paid jobs in developing countries.

The jobs of proactive professionals are safe for the next 10 years

By proactive professionals, I mean anyone who has received specialized education, works in their field, accumulates experience and knowledge in it, and expands their horizons and scope of responsibility.

Any professional will note that their field has a massive, invisible layer of knowledge and skills that separates a novice from an expert. This layer is usually comparable in size to, or even exceeds, the content of higher education. The defining characteristic of a professional’s knowledge and skills is their informal nature. They aren’t written down in textbooks and are often not even expressed in words — they exist as mental images in our heads.

Of course, a professional can express them in words, but it takes time and effort, and the result won’t be 100% accurate. As a blogger and technical lead, I can vouch for this — extracting knowledge from one’s head and transferring it somewhere in a way that’s understandable to readers or colleagues is difficult and time-consuming. In essence, it’s a separate skill and a lot of work.

Consequently, this knowledge and these skills are not present in the training data for neural networks in an explicit form. Neural networks learn them indirectly, but the results are not the best.

The lack of this knowledge affects the ability of neural networks to provide high-quality answers to deep professional questions. I encounter this all the time — LLMs often can’t focus on the right things in their answers and generate correct but completely unnecessary information. To focus them, one has to write long introductions with all the context (where a few words would be enough for colleagues), and even those don’t always help.

A similar logic applies not only to professional knowledge but also to knowledge about an active project. Most of this knowledge also lives in people’s minds in the form of biases, implicit agreements, fantasies, hints, and so on. Few people can write it all down in full (I can’t), and few have the time (I don’t) to produce an essay that will be outdated in a month.

Therefore, in the near future, most AI progress will focus on solving very specific and narrowly scoped tasks, with the assumption that professionals will retain control over choosing the direction of work. Seasoned professionals will need to retrain to use new tools, while young specialists will have to acquire more product skills than their predecessors needed at the start of their careers. And that’s a good thing.

To automate the work of professionals, we need to solve several problems:

  1. Create a system that continuously learns from new experimental data, including erroneous data, deliberately incorrect data, or data valid only for specific cases of a particular project. So far, there has been no significant progress in this area.
  2. Create a system that completely inverts the flow of control/ownership in a project. Such a system would have to own all high-level information about the project and delegate specific tasks to people or AI agents. In theory, nothing prevents us from creating a similar system, but the work is long and complex and may require excessive formalization of currently informal interactions.

That's why I don't expect such systems to appear soon.

The uncertainty of achieving strong AI with current technologies

As I mentioned earlier, we shouldn't expect strong AI based on current neural network architectures.

But creating strong AI based on current neural networks and some architectural overlay is theoretically possible.

On the one hand:

  • RAG, for instance, combined with several neural networks for updating a knowledge base, seems like a potential way to close the feedback loop (… -> information gathering -> analysis -> synthesis -> action -> …) and get a learning AI (a rough sketch of such a loop follows these lists).
  • Some experiments with agent communication in games show that it's relatively easy to get behavior that looks meaningful.

On the other hand:

  • Basic experiments with agent communication have not led to scaled-up continuations, which suggests the presence of hidden complexity.
  • We still can't describe the logical architecture of the brain that underlies thinking (we can describe the physical architecture, but it's like listing the molecules in a cake to describe its taste). So we don't know where to look and can't estimate the complexity of the necessary architectural overlay (other than that it will not be simple).
  • Even if we assume that such an architectural overlay can be created with our current capabilities, there is no reason to believe that it will work at a reasonable speed on reasonable resources. Perhaps a strong AI will require the combination of billions of agents, or a 1000-fold acceleration of communication between them, or something else that is currently beyond our capabilities.
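
To make the first bullet of the "on the one hand" list more concrete, here is a minimal conceptual sketch in Python of what closing such a feedback loop around a retrieval-augmented model might look like. Every name in it (KnowledgeBase, llm_analyse, llm_plan, execute) is hypothetical; the sketch only illustrates the shape of the loop, not a real API or a working recipe.

```python
# A conceptual sketch of closing the feedback loop around a generative knowledge base:
# gather -> analyse -> synthesise -> act -> update the knowledge base -> repeat.
# All names here are hypothetical stand-ins, not a real library or API.

from dataclasses import dataclass, field

@dataclass
class KnowledgeBase:
    """A stand-in for the retrieval store a RAG setup would query and extend."""
    documents: list[str] = field(default_factory=list)

    def retrieve(self, query: str) -> list[str]:
        # A real system would use embeddings or full-text search; we keep it trivial.
        return [d for d in self.documents if query.lower() in d.lower()]

    def add(self, observation: str) -> None:
        self.documents.append(observation)

def llm_analyse(query: str, context: list[str]) -> str:
    # Stand-in for a model call that analyses the retrieved context.
    return f"analysis of '{query}' given {len(context)} retrieved documents"

def llm_plan(analysis: str) -> str:
    # Stand-in for a model call that synthesises an action from the analysis.
    return f"action derived from: {analysis}"

def execute(action: str) -> str:
    # Stand-in for acting in the world and observing the result.
    return f"observed result of: {action}"

def agent_loop(kb: KnowledgeBase, goal: str, steps: int = 3) -> None:
    for _ in range(steps):
        context = kb.retrieve(goal)            # information gathering
        analysis = llm_analyse(goal, context)  # analysis
        action = llm_plan(analysis)            # synthesis
        result = execute(action)               # action
        kb.add(result)                         # "learning": the loop feeds itself

agent_loop(KnowledgeBase(["goal: initial note"]), goal="goal")
```

The "on the other hand" points above are largely about why a loop like this is much harder to make genuinely useful than this level of abstraction suggests.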

Simplifying, I'm ready to admit that a project on the scale of the Apollo program or the Manhattan Project aimed at creating strong AI might be possible today. However, it is unlikely to result in the integration of such AI into everyday life — just as Apollo did not lead to the colonization of the Moon or even to a permanent human presence there.

We may say that strong AI is in a Schrödinger's cat state right now — when the box is opened, it will either work or not.

If an architectural breakthrough doesn’t happen

Well, then it doesn’t happen, and we’ll continue living as we always have.

If an architectural breakthrough happens

Well, then it happens — does it really matter who’s on the other end of the line during remote work?

In fact, I believe that the arrival of strong AI would bring humanity more problems than benefits.

We already have plenty of chauvinists, racists, sexists, *phobes, perpetually offended individuals, and other unpleasant persons. With the advent of strong AI, their numbers will only grow, and new forms of chauvinism, racism, sexism, and phobia will emerge. Foolishness will increase, and AI won’t help with that — these problems are rooted in our institutions and culture, and it’s up to us to address them.

Strong AI will add to the list of fundamental problems, but the economic benefits (compared to advanced tools based on neural networks) may not be that great.

If an architectural breakthrough happens and we create a very smart AI

Well, then we create it, and that’s that. :-)

Humanity has always been in a state of interaction between individuals with different levels of intelligence:

  • The intellect of two mature people often differs significantly, sometimes by a factor of 1.5, and in special cases by a factor of 2.
  • Mature people somehow find common ground with children and the elderly, even though the difference in knowledge and thinking can be significant.
  • Professionals (in one field) find common ground with non-professionals (in the same field). After all, we all use the services of lawyers and even dentists! Not to mention psychologists. Somehow it all works.

So if AI suddenly becomes slightly smarter than a person, I don’t see any problems — we’ll adapt.

I don’t believe in the appearance of a god-like super AI, the logic of which we won’t be able to understand in a reasonable amount of time. Let's first create smart NPCs in games and learn to elect smart deputies, and then we can discuss super AI that will decide the fate of humanity.

Social risks

I don’t see critical risks (though there are plenty of non-critical ones) in the fields of economics, ecology, resources, or production — almost nowhere, except for the long-term consequences for the social sphere.

Degradation of expertise

New tools allow real experts to become even more informed, make better decisions, and work more efficiently.

But these tools don’t simplify the delivery of new complex ideas to other people.

On the other hand, these same tools allow any charlatan to appear smart and knowledgeable. Debating a hypothetical anti-vaxxer armed with a modern language model (in front of an unprepared audience) is significantly harder than debating an anti-vaxxer without one.

I have encountered the opinion that neural networks will help anyone to fact-check information. I wouldn’t count on it:

  • First, a person should have the habit of fact-checking, which most people don’t, and it’s hard to develop.
  • Second, a person should have the desire to fact-check a concrete piece of information. Why fact-check if it looks convincing?
  • Third, neural networks are complex tools. Like any complex tool, they need to be learned, and not everyone will be able to use them effectively.

That’s why I expect large waves of various pseudo-scientific and pseudo-rational nonsense, along with a decline in the quality of governance in most countries.

Polarization of education

The use of AI by someone trained to work with information will make them even more informed and educated. This is because they will learn to use AI effectively and will also learn from the results it provides.

The use of AI by an untrained person will create an illusion of expertise in themselves. Such a person will neither learn nor critically evaluate the results of their interactions with AI or the information they receive. Among programmers, there are jokes about coding by copy-pasting from StackOverflow. Imagine that problem magnified a hundredfold and spread across every profession.

As a result, people with access to good education and a culture that encourages self-education will become more successful. People without access to such benefits (the majority, unfortunately) will become less successful.

Summary

  • We won’t create paradise on Earth.
  • Strong AI won’t arrive to save us from ourselves.
  • Neural networks are becoming essential work tools, and we need to learn how to use them.
  • AI will find it hard to take your job if you stay updated on trends, keep learning, and work diligently.
  • We should support those who have no opportunity to learn new skills.
  • Learning to work with information and developing critical thinking is crucial.
  • Developing product-oriented thinking is essential.