Published: 30/07/2025
Welcome to TechScape. Today, we’re diving into the massive investments tech companies are making in artificial intelligence (AI) infrastructure. The goal is to develop the world’s most advanced AI, which is expected to supercharge their bottom lines and keep investors and Wall Street happy. However, this spending spree comes with significant environmental and ethical concerns.
Tech companies are in a fierce battle to claim the title of world’s most advanced AI. That means spending billions on data centers and other physical infrastructure to house and power the supercomputers AI requires. It also means a significant drain on natural resources and the power grid in the areas surrounding those data centers.
Last week’s earnings reports made it clear that tech firms are forging ahead. Google announced it was planning to spend $85 billion on building out its AI and cloud infrastructure in 2025 alone—$10 billion more than it initially predicted. The company expects this spending to increase again in 2026. For context, Google reported $94 billion in revenue in the second quarter of this year. Chief executive Sundar Pichai said Google is in a “tight supply environment” when it comes to the infrastructure needed to support AI processing and compute. The results of this increased spending would still take years to be realized, he said.
Google isn’t alone in this massive spending spree. Amazon has said it plans to spend $100 billion in 2025, with the “vast majority” of this budget going to powering the AI capabilities of its cloud division. As a point of comparison, Amazon spent just under $80 billion in 2024. “Sometimes people make the assumption that if you’re able to decrease the cost of any type of technology component … that somehow it leads to less total spend in technology,” said Amazon’s CEO Andy Jassy during an earnings call in February. “We’ve never seen that to be the case.”
Meta, too, has upped the amount it plans to spend on AI infrastructure. In June, Mark Zuckerberg said the company planned to spend “hundreds of billions” of dollars on building out a network of massive data centers across the US, including one that the firm expects to be up and running in 2026. Originally, executives said the firm was projected to spend $65 billion in 2025 but adjusted that to anywhere between $64 billion and $72 billion. Meta and Amazon report earnings this week.
Artificial intelligence companies have come under fire for cannibalizing creative industries. Artists have seen their work used without permission as companies train their algorithms, and creative teams have shrunk or been laid off as parts of their work are handed off to AI. “It will mean that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly, and at almost no cost be handled by AI,” Sam Altman, the CEO of OpenAI, has said. “No problem.”
In response, coalitions of artists have launched several copyright lawsuits against the top AI companies, including OpenAI, Meta, Microsoft, Google, and Anthropic. The companies say that under the “fair use” doctrine they should be able to use copyrighted material for free and without consent. Artists, including names such as Sarah Silverman and Ta-Nehisi Coates, say the companies shouldn’t be able to profit off their work. So far, the AI companies are winning.
Adobe, the software company best known for making creative tools such as Photoshop, says it’s trying to walk the line between developing useful AI programs and making sure artists aren’t getting the short end of the stick. The company has introduced two “creator-safe” tools aimed at tackling issues around copyright and intellectual property. One is its Firefly AI model, which Adobe says is trained only on licensed or public-domain content. The other is the Adobe Content Authenticity web app, which lets photographers and other visual artists indicate when they don’t want their work to be used to train AI and also lets them add credentials to their digital creations.
Artists can “apply a signature to it in the same way that a photographer might sign a photo or a sculptor would etch their initials into a sculpture,” said Andy Parsons, a senior director at Adobe who oversees the company’s work on content authenticity. We spoke with Parsons about the burgeoning world of AI and what it means for creators.
Q: What do you see as the biggest issues that creators and artists are facing with the advent of AI, and generative AI?
I think there’s one prevailing issue, which is the concern that various AI techniques will compete with human ingenuity and with artists of all kinds. And that goes for agencies, publishers, individual creators.
Q: Is Adobe Firefly one of the ways that Adobe is trying to address these problems and make sure that creators’ work is not ripped off?
Yeah, absolutely. From the beginning of Adobe Firefly, we followed two guiding principles. One is to make sure that Adobe Firefly is not trained on publicly available content. It’s only trained on things that Adobe and the Firefly team have exclusive rights to use. That means that it can’t do certain things. It cannot make a photo of a celebrity, because that celebrity’s likeness we would consider guarded and potentially protected.
The second thing we built in from the beginning is transparency, so knowing that something that comes out of Firefly was generated by AI. This is what we call content provenance, or content authenticity. It’s making clear something is a photograph or made by an individual artist as opposed to made by AI.
Q: What is the Adobe Firefly trained on?
It’s a combination of Adobe Stock and some licensed datasets. It’s trained on things that Adobe has clear rights to use in this manner.
Q: How do tech companies like Adobe avoid copyrighted materials sneaking into the datasets?
We have licensed and clear rights to all of the data that goes into that dataset. There’s an entire team devoted to trust, safety, and assurances that the material is available to be used. We don’t crawl the open web, because as soon as you do that, you do risk potentially infringing on someone’s intellectual property. Our feeling is it’s not always the case that more training data is better.
Q: What does the future of human creativity look like now that we’re living in this new world with generative AI?
When it comes to content authenticity, there’s that “nutrition label” idea we sometimes talk about. If you walk into a food store, you have a fundamental right that’s fulfilled in most democratic societies to know what’s in the food that you’re going to serve your family. And we think the same is true of digital content. We have a fundamental right to know what it is.
The UK has also rolled out its new online safety rules after a long lead-up. Under the Online Safety Act, a wide range of online platforms and services must put in place measures to protect users, particularly children, from harmful content, with the regulator Ofcom empowered to penalize services that fall short. The rules are part of a broader effort to address growing concerns over online safety and the impact of AI and other technologies on society.