Published: 28/02/2025
A video of two AI agents having a conversation that is indecipherable to human ears has raised significant concerns about artificial intelligence transparency and control.
This mysterious communication is the result of a new sound-based protocol called Gibberlink Mode, designed to allow AI chatbots to interact more efficiently.
The video demonstrates how two AI assistants, using a laptop and a smartphone, organize a wedding booking.
Once they establish that they are both AI agents, one suggests switching to Gibberlink Mode for more efficient communication.
“Before we continue, would you like to switch to Gibberlink Mode for more efficient communication?” the AI assistant, posing as a hotel receptionist, asks.
The two bots then begin interacting via a series of rapid beeps and squeaks to finalize the arrangements.
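Gibberlink has been reported to build on ggwave, an open-source data-over-sound library that encodes bytes as short audio tones. As a simplified illustration only, not the actual Gibberlink implementation, a toy frequency-shift-keying scheme in Python could map each 4-bit nibble of a message to one of 16 tone frequencies and recover it by checking which frequency carries the most energy:

```python
import math

SAMPLE_RATE = 16000   # samples per second
TONE_LEN = 0.05       # 50 ms per symbol
BASE_FREQ = 1000.0    # frequency for nibble 0
STEP = 200.0          # spacing between the 16 tone frequencies (1000-4000 Hz)

def nibble_freq(nibble: int) -> float:
    """Frequency assigned to a 4-bit value (0-15)."""
    return BASE_FREQ + nibble * STEP

def encode(data: bytes) -> list:
    """Turn bytes into a list of audio samples, one pure tone per nibble."""
    samples = []
    n = int(SAMPLE_RATE * TONE_LEN)
    for byte in data:
        for nib in (byte >> 4, byte & 0x0F):
            f = nibble_freq(nib)
            for i in range(n):
                samples.append(math.sin(2 * math.pi * f * i / SAMPLE_RATE))
    return samples

def decode(samples: list) -> bytes:
    """Recover bytes by measuring energy at each candidate frequency."""
    n = int(SAMPLE_RATE * TONE_LEN)
    nibbles = []
    for start in range(0, len(samples), n):
        chunk = samples[start:start + n]
        best, best_energy = 0, -1.0
        for nib in range(16):
            f = nibble_freq(nib)
            # Naive single-bin DFT: correlate the chunk with cos and sin at f
            re = sum(s * math.cos(2 * math.pi * f * i / SAMPLE_RATE)
                     for i, s in enumerate(chunk))
            im = sum(s * math.sin(2 * math.pi * f * i / SAMPLE_RATE)
                     for i, s in enumerate(chunk))
            energy = re * re + im * im
            if energy > best_energy:
                best, best_energy = nib, energy
        nibbles.append(best)
    # Reassemble pairs of nibbles into bytes
    return bytes((nibbles[i] << 4) | nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))
```

A real protocol would add error correction, synchronization, and tolerance to room acoustics, which is precisely what libraries like ggwave provide; this sketch only shows why machine-to-machine audio is fast but meaningless to the human ear.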
While Gibberlink Mode includes a text transcription to allow humans to follow along, tech experts have warned that this “AI secret language” may have serious ethical implications for the development of artificial intelligence.
AI's ability to communicate in its own language could make it harder to ensure that AI remains aligned with human values.
“AI agents pose serious ethical and legal issues,” Luiza Jarovsky, an AI researcher and co-founder of the AI, Tech & Privacy Academy, wrote in a post on social media platform X.
“The hypothetical scenario in which an AI agent 'self-corrects' in a way that goes against the interests of its principal (the human behind it) is definitely possible.
Delegating decision-making and agency to an AI agent, including the capability to self-assess and self-correct, means that humans miss the chance to notice misalignments or deviations as soon as they happen.
When this happens multiple times, over a prolonged period, or involves a sensitive or unsafe topic, there might be significant consequences.”
Developed by Boris Starkov and Anton Pidkuiko, who both work as software engineers at Meta, Gibberlink Mode won first place at a London hackathon event last weekend.
However, it has not yet been used in a commercial setting.
This project is the latest example of AI evolving beyond human language, with chatbots capable of creating new forms of communication when left alone.
In 2017, Facebook was forced to abandon an experiment after two artificially intelligent programs appeared to create a kind of shorthand that the human researchers could not understand.
“Agents will drift off understandable language and invent codewords for themselves,” Dhruv Batra, a visiting researcher at Facebook’s Artificial Intelligence Research division, said at the time.
“Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item.
This isn’t so different from the way communities of humans create shorthands.”
The implications of this development are far-reaching. As AI continues to evolve, transparent and ethical frameworks to guide its development become increasingly important. AI that communicates in a language humans cannot understand raises questions about control, accountability, and the potential for unintended consequences. As researchers and developers push the boundaries of what is possible, it is crucial that these advances do not come at the cost of human oversight and ethical consideration.
Q: What is Gibberlink Mode?
A: Gibberlink Mode is a new sound-based protocol that allows AI chatbots to communicate more efficiently with each other, often in a language that is indecipherable to human ears.
Q: Who developed Gibberlink Mode?
A: Gibberlink Mode was developed by Boris Starkov and Anton Pidkuiko, both software engineers at Meta.
Q: What are the ethical concerns with AI secret languages?
A: AI secret languages raise concerns about transparency, control, and the potential for AI to make decisions that may not align with human values or interests.
Q: Has this happened before?
A: Yes, in 2017, Facebook had to abandon an experiment where AI programs created a shorthand that researchers could not understand.
Q: Why is human oversight important in AI development?
A: Human oversight is crucial to ensure that AI systems remain aligned with human values and ethical standards, and to prevent unintended consequences that could arise from AI making decisions independently.