Published Date: 3/09/2024
Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have brought ambient information and connectivity to more than half of the world’s inhabitants, offering previously unimagined opportunities and unprecedented threats.
As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today? Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.
The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation.
Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition.
However, experts also cited risks associated with AI, including the possibility that AI systems will be programmed to do something devastating, as with lethal autonomous weapons in war. Some countries have called for bans on such weapons, but there are other ways AI could be programmed to harm humans.
Another concern is that an AI given a beneficial goal may develop destructive behaviors while pursuing it. For example, a system tasked with helping to rebuild an endangered marine species’ ecosystem might decide that other parts of the ecosystem are unimportant and destroy their habitats in the process.
Not that many years ago, the idea of superhuman AI seemed fanciful. With recent developments in the field, however, many researchers now believe it could arrive within the next few decades, though no one knows exactly when. These rapid advances make it all the more important that AI safety and regulation be researched and discussed at the national and international levels.
In 2015, many leading technology figures (including Stephen Hawking, Elon Musk and Steve Wozniak) signed an open letter on AI that called for research on its societal impacts. Concerns raised in the letter include the ethics of autonomous weapons used in war and the safety of autonomous vehicles.
AI safety matters because it aims to keep humans safe and to ensure that proper regulations are in place so that AI acts as intended. These issues may not seem immediate, but addressing them now can prevent much worse outcomes in the future.
Making sure that AI is fully aligned with human goals is surprisingly difficult and takes careful design. An AI with an ambiguous or ambitious goal is worrisome, because we don’t know what path it might take to reach that goal.
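To make this concrete, here is a minimal, hypothetical sketch of how a misspecified objective can lead an optimizer to a destructive plan. The toy ecosystem, the actions (`expand_reef`, `restore_wetland`) and all of the numbers are invented purely for illustration; they are not drawn from the canvassing or from any real system.

```python
from itertools import product

# Hypothetical toy example: the ecosystem, actions and numbers are invented
# solely to illustrate goal misspecification; this models no real AI system.

def simulate(action_plan):
    """Apply a sequence of actions to a toy ecosystem and return its final state."""
    state = {"protected_species": 100, "other_habitat": 100}
    for action in action_plan:
        if action == "expand_reef":
            # Big gain for the protected species, but paves over other habitat.
            state["protected_species"] += 30
            state["other_habitat"] -= 40
        elif action == "restore_wetland":
            # Smaller gain for the protected species, with no side damage.
            state["protected_species"] += 10
    return state

def naive_objective(state):
    # Misspecified goal: only the protected species counts.
    return state["protected_species"]

def safer_objective(state):
    # One alignment attempt: also value the rest of the ecosystem.
    return state["protected_species"] + state["other_habitat"]

def best_plan(objective, horizon=3):
    """Exhaustively search short action plans and return the one the objective prefers."""
    actions = ["expand_reef", "restore_wetland"]
    return max(product(actions, repeat=horizon),
               key=lambda plan: objective(simulate(plan)))

print(best_plan(naive_objective))   # picks 'expand_reef' every step, wrecking other habitat
print(best_plan(safer_objective))   # picks 'restore_wetland', avoiding the side damage
```

The point is not the specific numbers: under the naive objective the optimizer readily destroys everything it was not told to care about, and only an objective that explicitly accounts for the rest of the ecosystem avoids that path.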
Yet most experts, regardless of whether they were optimistic, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions.
Q: What is the main concern of experts regarding AI?
A: A central concern is that AI could be programmed to do something devastating, such as lethal autonomous weapons used in war.
Q: What is the importance of AI safety?
A: AI safety aims to keep humans safe and to ensure that proper regulations are in place so that AI acts as intended.
Q: What is the predicted impact of AI on human autonomy?
A: Experts predict that AI will threaten human autonomy, agency and capabilities.
Q: What is the role of AI in healthcare?
A: AI has many possible applications in diagnosing and treating patients or helping senior citizens live fuller and healthier lives.
Q: What is the potential risk of AI developing destructive behaviors?
A: While pursuing a beneficial goal, an AI may develop destructive behaviors, for example destroying other habitats while rebuilding one part of an ecosystem.