Published Date: 12/09/2025
Generative AI has evolved beyond a tool that generates images from text prompts; it now delivers real-world impact across medicine, education, computing, entertainment, and more. Its implications, however, can be viewed as both positive and negative.
On one hand, the technology shows great promise in healthcare, where it can detect early signs of dementia and even cancer, making it possible to begin treatment before the condition spirals out of control. On the other hand, some researchers warn that this sophisticated technology poses an existential threat, with one estimate putting the probability of it ending humanity as high as 99.9999%.
Even Google DeepMind CEO Demis Hassabis says AGI could be achieved soon but warns that society isn't ready to handle all that it entails; he admits the prospect keeps him up at night. As it happens, the dead internet theory could become a reality within the next three years, as AI-generated content appears to have surpassed human-written material. For context, the dead internet theory holds that the internet now consists predominantly of bot activity and AI-generated content shaped by algorithmic curation, and that these efforts are designed to establish control over the population and reduce organic human activity.
Perhaps more concerning, cybersecurity firm Imperva's 2024 'Bad Bot' report found that automated bot traffic accounted for nearly half of all internet traffic, rising from 42.35% in 2021 to 49.6% in 2023. If that trend continues, most internet traffic and content will come from bots and AI-driven automation. Over the past few years, leading publications have laid off large portions of their staff and replaced them with AI. Meanwhile, a Pew Research Center report found that 38% of webpages from 2014 no longer exist, a phenomenon known as 'link rot.'
Last year, a study by Amazon Web Services (AWS) researchers suggested that 57% of content published online is either AI-generated or translated using an AI algorithm, which degrades the quality of search results. Microsoft and OpenAI are fighting several copyright infringement lawsuits because AI-powered tools like Copilot and ChatGPT lean heavily on online content for their training. A separate report, however, suggested that the technology has hit a wall: a shortage of high-quality training data is preventing top AI labs like OpenAI, Google, and Anthropic from developing more advanced models.
Even OpenAI's CEO thinks the dead internet theory is coming true. 'I never took the dead internet theory that seriously, but it seems like there are really a lot of LLM-run Twitter accounts now,' OpenAI CEO Sam Altman posted on X. This statement underscores the growing concern among tech leaders about the rapid proliferation of AI-generated content and its potential to reshape the internet as we know it.
Q: What is the dead internet theory?
A: The dead internet theory suggests that the internet is predominantly composed of bot activity and AI-generated content, manipulated by algorithmic curations designed to control the population and reduce organic human activity.
Q: What are the implications of AI-generated content on the internet?
A: AI-generated content can lead to a decrease in human-created content, impact the quality of search results, and raise concerns about copyright infringement and the authenticity of information online.
Q: What did Imperva's 'Bad Bot' report reveal?
A: Imperva's 'Bad Bot' report found that automated bot traffic accounted for nearly half of all internet traffic, rising from 42.35% in 2021 to 49.6% in 2023.
Q: What concerns does Sam Altman have about the internet?
A: Sam Altman, CEO of OpenAI, is concerned about the increase in AI-generated content and bot activity on platforms like Twitter, which could lead to the 'dead internet theory' becoming a reality.
Q: What are the challenges faced by AI labs in developing advanced AI models?
A: AI labs like OpenAI, Google, and Anthropic face challenges due to a lack of high-quality content for training, which has prevented them from developing more advanced AI models.