Over the years, AI research has passed several milestones that have significantly advanced the field, at times reaching performance comparable to that of humans on specific tasks.
For instance, AI-based summarization systems use machine learning models to analyze documents, extract key points, and produce clear, relevant summaries. However, a more general intelligence would go beyond this kind of specialization: it would be capable of solving problems across many domains autonomously, without human intervention or task-specific programming for each new problem. This is what we call Artificial General Intelligence (AGI).
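To make today's specialized picture concrete, here is a minimal, hedged sketch of such a summarization step using the Hugging Face transformers library; the pipeline's default model and the sample text are purely illustrative and not tied to any particular product.

```python
# Minimal sketch of today's specialized summarization (illustrative only):
# a pretrained model condenses one document into a short summary.
from transformers import pipeline

summarizer = pipeline("summarization")   # downloads the library's default model

document = (
    "Artificial intelligence systems are increasingly used to condense long "
    "reports into short overviews, extracting the key points so that readers "
    "can grasp the essentials without reading every page."
)

# Generation lengths are illustrative; the result is a single short summary.
print(summarizer(document, max_length=40, min_length=10)[0]["summary_text"])
```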
Artificial General Intelligence (AGI): What is it?
Artificial General Intelligence refers to an AI system’s ability to replicate human cognitive faculties, enabling it to solve problems, even when faced with new or unknown tasks.
This entails a system equipped with comprehensive knowledge and cognitive computing capabilities, making its performance indistinguishable from that of a human. It could even greatly surpass human abilities, thanks to its capacity to access and process massive amounts of data at speeds no person can match.
Currently, while AI systems can learn to perform novel tasks, they still require specific and tailored training. For example, for a large language model (LLM) to function effectively as a medical chatbot, it must first be fine-tuned with extensive domain-specific datasets. This highlights the limitations of modern AI, which must be carefully adapted to each new context, unlike the autonomy and versatility envisioned for AGI.
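As a rough illustration of that adaptation step (a sketch under stated assumptions, not a production recipe), the snippet below fine-tunes a small general-purpose language model on a hypothetical medical Q&A file using Hugging Face transformers; the base model, the medical_qa.jsonl path, and every hyperparameter are placeholders.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "distilgpt2"                                # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token          # GPT-2-style models have no pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical domain corpus: one {"text": "Q: ... A: ..."} record per line.
data = load_dataset("json", data_files="medical_qa.jsonl")["train"]
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medical-chatbot",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    # mlm=False: ordinary next-token (causal) language modeling objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # only after this step does the model become domain-adapted
```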
Specialized (Weak) AI vs. General (Strong) AI
- Specialized/Weak AI
This is the type of AI we use in most applications today. It is designed to perform specific tasks, such as recognizing faces, translating texts, or analyzing data.
The AI is trained on a dataset specific to a particular domain. That data is used to build models capable of identifying patterns or making decisions. Once trained, the AI is used to execute the task for which it was designed. IBM’s Watson supercomputer, expert systems, and self-driving cars are examples of weak AI.
Even with the current state of technology, the capacity of these systems to access and process data is impressive. For example, ROSS, a legal expert system often referred to as the “AI lawyer,” can extract data from about a billion text documents, analyze the information, and provide precise answers to complex questions in less than three seconds.
Despite these remarkable advancements, modern AI exhibits several limitations that underscore its inability to reproduce true general intelligence.
Limited Scope: A specialized AI is developed to solve a specific problem or perform a well-defined task. It cannot operate outside this framework.
Data Dependency: These systems require specific datasets for training. Their effectiveness, therefore, depends on the quality, quantity, and relevance of the data provided.
Lack of Autonomy or General Adaptability: A specialized AI cannot learn or adapt on its own to a new domain without human intervention. For example, a facial recognition AI cannot be used to analyze medical MRIs without being completely reprogrammed or retrained, as the short sketch after this list makes concrete.
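The following minimal sketch is purely illustrative (it is none of the systems named above): a model is trained on one narrow dataset, executes that one task well, and is useless for anything else.

```python
# Train-then-execute pattern of specialized (weak) AI, on one narrow domain.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)              # the whole "world": 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)  # training phase
print("accuracy on its own task:", model.score(X_test, y_test))       # execution phase

# Outside this framework the model is useless: it cannot read an MRI, translate
# a sentence, or even handle images of a different size. It would have to be
# retrained from scratch on new, domain-specific data.
```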
- General/Strong AI
As explained in the introduction, AGI refers to artificial intelligence capable of performing any intellectual task that a human could accomplish, with a level of understanding and performance that is similar to or exceeds human capabilities.
Here are some key elements that could enable its functionality:
Advanced Cognitive Modeling: AGI would be designed to mimic the structure and processes of the human brain, for example through artificial neural networks, but on a far more complex scale. It should be able to learn independently by observing its environment, much like a child does.
Cumulative Learning: It could learn throughout its “existence,” accumulating knowledge, combining it, and applying it to new situations.
Adaptive Memory and Reasoning: AGI would have integrated long-term memory, enabling it to recall past experiences while reasoning about future scenarios based on available data.
Knowledge Transfer: It would be able to use skills acquired in one domain to solve problems in another. For example, if it learns to cook by following a recipe, it could then apply similar principles to build furniture by following a blueprint.
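Today's closest, far narrower analogue of such knowledge transfer is transfer learning. The sketch below is a hedged illustration, not an AGI blueprint: a vision network pretrained on ImageNet is reused as a frozen feature extractor, and only a small new output layer is trained for a different task; the class count and dummy batch are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse "skills" learned on ImageNet for a new, unrelated classification task.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                      # freeze the pretrained knowledge

num_new_classes = 5                              # illustrative target task
backbone.fc = nn.Linear(backbone.fc.in_features, num_new_classes)  # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative update on a dummy batch; a real pipeline would loop over a
# DataLoader built from the new domain's images.
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, num_new_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), targets)
loss.backward()
optimizer.step()
```

The key point is that only the small new head is trained; the transferred representation does most of the work, which is a pale but real version of reusing skills across domains.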
The technical challenges of AGI
Designing such a complex system requires advancements not only in fields like machine learning, computational neuroscience, and cognitive modeling but also in addressing complex issues related to ethics, safety, and interoperability.
This ambitious goal requires overcoming current limitations, particularly in computational power, algorithm generalization, and contextual understanding, while ensuring human control over potentially autonomous entities.
1. Computational Power
The human brain, with its 86 billion neurons connected by trillions of synapses, operates as an extraordinarily efficient bioelectrical machine. Reproducing this complexity in artificial intelligence demands astronomical computational power.
AGI models would require supercomputers capable of processing massive data volumes in real time, with energy efficiency comparable to that of the human brain, which consumes only about 20 watts. Current architectures like GPUs and TPUs are not designed to emulate biological processes faithfully, which calls for new approaches in hardware design, such as neuromorphic or quantum computing.
2. Advanced Algorithms
Current AI models, such as deep neural networks, are specialized but inherently rigid. They excel at specific tasks after intensive training but cannot spontaneously adapt to unknown environments or unforeseen tasks. AGI would need to simulate cognitive processes akin to those of humans, such as intuition and creative thinking.
This demands algorithms capable of abstraction, causal reasoning, and meta-learning (learning how to learn). These algorithms should combine supervised, unsupervised, and reinforcement learning to adapt to a variety of situations.
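As a toy-scale, hedged illustration of meta-learning, the sketch below applies a Reptile-style update to randomly sampled sine-wave regression tasks: the inner loop adapts a tiny network to one task with ordinary gradient descent, and the outer loop nudges the shared initialization so that future tasks can be learned in just a few steps. All sizes and learning rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each 'task' is a sine wave with its own random amplitude and phase."""
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def init_params():
    return {"w1": rng.normal(0, 0.5, (1, 40)), "b1": np.zeros(40),
            "w2": rng.normal(0, 0.5, (40, 1)), "b2": np.zeros(1)}

def forward(p, x):
    h = np.tanh(x @ p["w1"] + p["b1"])           # hidden layer
    return h, h @ p["w2"] + p["b2"]              # prediction

def adapt(p, task, steps=8, lr=0.02, n=10):
    """Inner loop: a few ordinary gradient steps on one freshly sampled task."""
    p = {k: v.copy() for k, v in p.items()}
    x = rng.uniform(-5, 5, (n, 1))
    y = task(x)
    for _ in range(steps):
        h, pred = forward(p, x)
        err = (pred - y) / n                     # gradient of 0.5*MSE w.r.t. pred
        grads = {"w2": h.T @ err, "b2": err.sum(0)}
        dh = (err @ p["w2"].T) * (1 - h ** 2)    # backprop through tanh
        grads.update({"w1": x.T @ dh, "b1": dh.sum(0)})
        for k in p:
            p[k] -= lr * grads[k]
    return p

# Outer loop (Reptile-style): pull the shared initialization a little toward
# each task-adapted solution, so new tasks become learnable in a few steps.
meta = init_params()
for _ in range(2000):
    adapted = adapt(meta, sample_task())
    for k in meta:
        meta[k] += 0.1 * (adapted[k] - meta[k])
```

Reptile is a deliberately simple, first-order flavor of meta-learning; it is shown here only to make "learning how to learn" tangible, not as a path to AGI.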
3. Memory and Continuous Learning
True AGI should be able to retain and utilize accumulated knowledge while continuing to learn and improve autonomously. Modern systems often suffer from “catastrophic forgetting,” where learning new tasks can overwrite previously acquired knowledge. AGI should adopt mechanisms inspired by human memory, combining short-term (operational) and long-term (structured) memory.
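The toy sketch below illustrates both the problem and one classic mitigation, rehearsal (replaying stored examples), which is only a crude stand-in for the structured long-term memory discussed above; the two digit subsets play the role of sequential tasks, and exact scores will vary from run to run.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier

X, y = load_digits(return_X_y=True)
task_a, task_b = (y < 5), (y >= 5)         # two sequential "tasks" sharing one model
classes = np.unique(y)

# 1) Naive sequential training: learn task A, then task B, with no memory of A.
clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X[task_a], y[task_a], classes=classes)
clf.partial_fit(X[task_b], y[task_b])
print("no rehearsal   | task A:", clf.score(X[task_a], y[task_a]),
      "| task B:", clf.score(X[task_b], y[task_b]))   # task A typically collapses

# 2) Rehearsal: mix a small buffer of stored task-A examples into task-B training.
buffer = np.random.default_rng(0).choice(np.where(task_a)[0], size=100, replace=False)
clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X[task_a], y[task_a], classes=classes)
clf.partial_fit(np.vstack([X[task_b], X[buffer]]),
                np.concatenate([y[task_b], y[buffer]]))
print("with rehearsal | task A:", clf.score(X[task_a], y[task_a]),
      "| task B:", clf.score(X[task_b], y[task_b]))   # task A retention usually improves
```

Rehearsal is only one of several mitigation strategies; regularization-based approaches such as elastic weight consolidation pursue the same goal by protecting the weights most important to earlier tasks.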
4. Understanding the Physical World
AGI cannot be limited to abstract or numerical data processing; it must also comprehend and interact with the physical world. This includes vision, hearing, and touch, with the ability to interpret these inputs contextually. Recognizing an object involves understanding its use, texture, physical properties, and role within an environment.
The major ethical questions raised by AGI
Can AGI Be Conscious?
The question of AGI consciousness raises profound philosophical, scientific, and ethical issues. If AGI developed a form of consciousness, it would mean it could experience subjectivity, emotions, or self-perception as a distinct being. Should it be granted rights similar to humans or animals?
How Can Bias Be Avoided?
Current AI often reproduces biases present in training data. AGI, with its superior learning and generalization capabilities, could either transcend these biases by identifying and correcting biased patterns in data or amplify them if not designed with proper safeguards.
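One such safeguard, already standard practice in today's narrow AI and sketched here on purely synthetic data, is to measure a fairness metric before deployment, for example the gap in positive prediction rates between two groups (demographic parity).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                     # a protected attribute (0 or 1)
skill = rng.normal(0, 1, n)                       # the legitimate signal
# Historical labels encode a bias: group 1 was approved less often at equal skill.
label = (skill - 0.4 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, group])               # a naive model sees the attribute
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

rates = {g: pred[group == g].mean() for g in (0, 1)}
print("positive prediction rate per group:", rates)
print("demographic parity gap:", abs(rates[0] - rates[1]))
# A large gap would trigger mitigation before deployment: removing or
# decorrelating the attribute, reweighting examples, or adjusting thresholds.
```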
Who Controls AGI?
Determining who will control AGI is crucial to ensuring it serves the common good rather than private or partisan interests. An international governance model involving companies, governments, NGOs, and researchers could define guiding principles and ethical standards.
When will artificial general intelligence be achieved?
Predictions regarding the development of AGI vary widely: it could take a few years, a decade, or even two. However, there is a growing consensus that AGI will likely emerge within our lifetime.
Here’s what some major tech figures have to say:
- Sam Altman, CEO of OpenAI, estimates that AGI could be achieved in a “reasonably near future.” However, he believes its impact on the world and on employment might be far less significant than people currently imagine.
- Shane Legg, co-founder of Google DeepMind, stated in an interview with a tech podcaster that there is a 50% chance AGI will be realized by 2028. He had previously expressed this belief on his blog in 2011.
- Elon Musk predicted that fully realized AGI could appear by 2029. According to him, rather than replacing humans, AI will encourage them to become more interesting and useful.
- Ray Kurzweil, futurist, inventor, and author, shared a similar prediction during the SXSW conference in 2017 in Texas. Known for his numerous technological forecasts, 86% of which have proven correct, he stated: “By 2029, computers will have intelligence equivalent to that of humans. This will lead to computers integrating human intelligence, which we can connect to our brains and to the cloud, thus expanding what we are. Today, this is not just a futuristic vision. It already partially exists and is accelerating.”
AGI: Where do we stand?
While advanced models like GPT-4 and other neural network-based systems demonstrate impressive capabilities, they remain far from true AGI. These systems lack deep understanding and cannot reason or learn beyond the data they were trained on.
Artificial General Intelligence represents a fascinating goal that could transform society. If realized, it could solve previously insurmountable problems and push the boundaries of our understanding of intelligence. However, its potential comes with significant technical, ethical, and societal challenges.
We are still in an exploratory phase, but discussions around AGI remind us that science fiction may not be so far from reality. The question is not only if AGI will be created but how and under what conditions it will integrate into our world.