Published Date: Sept. 6, 2024
Although AGI is a milestone that still eludes science, some researchers say that it is only a matter of years before humanity builds the first such model. Scientists in China have created a new computing architecture that can train advanced artificial intelligence (AI) models while consuming fewer computing resources — and they hope that it will one day lead to artificial general intelligence (AGI).
The most advanced AI models today — predominantly large language models (LLMs) like ChatGPT or Claude 3 — use neural networks: collections of machine learning algorithms arranged in layers that process data in a way loosely similar to the human brain, weighing up different options to arrive at conclusions.
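As a rough illustration of how such layered networks pass data forward, here is a minimal sketch in Python; the layer sizes, random weights and ReLU activation are illustrative assumptions, not details of any model mentioned above:

```python
import numpy as np

def relu(x):
    # A common activation function: passes positive signals, blocks negative ones
    return np.maximum(0.0, x)

def forward(x, layers):
    """Push an input vector through a stack of (weights, biases) layers."""
    for W, b in layers:
        x = relu(W @ x + b)  # each layer weighs its inputs, then "fires"
    return x

rng = np.random.default_rng(0)
# Illustrative stack: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
sizes = [4, 8, 8, 2]
layers = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes, sizes[1:])]

print(forward(rng.normal(size=4), layers))
```

In a real LLM the same layered principle applies, just with billions of weights learned from data rather than drawn at random.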
However, LLMs are currently limited because they can't perform beyond the confines of their training data and can't reason the way humans do. AGI is a hypothetical system that can reason, contextualize, edit its own code and understand or learn any intellectual task that a human can.
Today, creating smarter AI systems relies on building even larger neural networks. Some scientists believe neural networks could lead to AGI if scaled up sufficiently. But this may be impractical, given that energy consumption and the demand for computing resources will also scale up with it.
Other researchers suggest that novel architectures, or a combination of different computing architectures, are needed to achieve a future AGI system. In that vein, a new study published Aug. 16 in the journal Nature Computational Science proposes a computing architecture inspired by the human brain that its authors hope will sidestep the practical problems of scaling up neural networks.
The human brain has 100 billion neurons and nearly 1,000 trillion synaptic connections — with each neuron benefitting from a rich and diverse internal structure. However, its power consumption is only around 20 watts.
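To put that 20-watt figure in perspective, here is a rough back-of-envelope estimate; the roughly once-per-second activity rate per synapse is a ballpark assumption for illustration, not a figure from the study:

```python
# Figures from the article
synapses = 1e15        # ~1,000 trillion synaptic connections
power_watts = 20.0     # total power consumption of the brain

# ASSUMPTION (not from the article): each synapse is active about
# once per second, a common ballpark for average firing rates
events_per_second = synapses * 1.0

joules_per_event = power_watts / events_per_second
print(f"~{joules_per_event:.0e} joules per synaptic event")  # ~2e-14 J
```

Under those assumptions, the brain spends on the order of tens of femtojoules per synaptic event, orders of magnitude less than the energy today's hardware spends per comparable operation.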
Q: What is Artificial General Intelligence (AGI)?
A: AGI is a hypothetical system that can reason, contextualize, edit its own code and understand or learn any intellectual task that a human can.
Q: What is the current limitation of Large Language Models (LLMs)?
A: LLMs are currently limited because they can't perform beyond the confines of their training data and can't reason the way humans do.
Q: What is the Hodgkin-Huxley (HH) network?
A: The Hodgkin-Huxley (HH) model is a computational model that simulates neuronal activity and shows the highest accuracy in capturing neuronal spikes. An HH network is a neural network built from these detailed model neurons.
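For a concrete sense of what the HH model computes, here is a minimal single-neuron simulation sketch in Python, using standard textbook squid-axon parameters and simple Euler integration; the injected current and the spike-counting threshold are illustrative assumptions, and this shows the textbook model, not the network architecture from the study:

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (textbook squid giant axon values)
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # max channel conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

# Voltage-dependent opening/closing rates for the gating variables m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                     # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32    # resting-state initial conditions
I_ext = 10.0                           # ASSUMPTION: injected current, uA/cm^2

spikes, prev_V = 0, V
for _ in range(int(T / dt)):
    # Ionic currents through sodium, potassium and leak channels
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Forward-Euler update of the membrane voltage and gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    if prev_V < 0.0 <= V:              # count upward zero-crossings as spikes
        spikes += 1
    prev_V = V

print(f"{spikes} spikes in {T:.0f} ms at I_ext = {I_ext} uA/cm^2")
```

Each simulated neuron tracks its membrane voltage plus three ion-channel gating variables, which is what makes HH neurons far richer internally — and far costlier to compute — than the simple weighted units in a conventional neural network.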
Q: What is SingularityNET's approach to achieving AGI?
A: SingularityNET has proposed building a supercomputing network that relies on a distributed network of different architectures to train a future AGI model.