Published Date: 6/9/2025
I recently met a leader in the communications industry, and as we were chatting over coffee, he shared that he’s been hearing the phrase “two things can be true at the same time” a lot recently. This is also something I’ve been saying for a couple of years in discussions around politics, AI, and a variety of other issues.
In a polarized world in which opinions are shared as fact, data and statistics are made to fit ideologies, and the truth doesn’t seem to matter, expressing the view that two seemingly contradictory perspectives can both be true is a pragmatic way to find common ground. It recognizes that there are different ways to look at the same issues.
While making the effort to recognize different perspectives is healthy, ideologues (on either side of the political spectrum) are rarely interested in recognizing that there may be another side to an argument. When you are devoted to a particular position, the idea of an alternate version — or even the acknowledgement that there may be grey between black and white — creates cognitive dissonance.
Why bring this up? In part, because many of the discussions around AI seem to be somewhat bipolar. For many, AI is still the shiny new tool that will write great emails, automate the lengthy process of engaging with journalists, or lead to faster and easier content generation. For others, AI will kill jobs, dumb down the industry, or lead us to an existential doomsday in which the rise of content leads to the fall of engagement.
As someone who has spent significant time with AI companies, building tools, working with various LLMs, and discussing the impact of AI with lawmakers, I firmly believe that there are reasons to be optimistic and pessimistic. It’s not all black and white.
One way to frame the discussion of AI is to think of it like electricity. Electricity is key to powering the economy and it drives machines that do a lot of different things. Some of those are good. Some are not. Electricity gives us light, but it can also kill us.
AI, like electricity, is not intrinsically good or bad. It’s what we do with it that matters. As communicators, we have agency. We decide which choices will shape the future of the industry. We are not powerless. We are responsible for making decisions about how AI is employed. And, consequently, if we get this wrong, shame on us.
If communicators ultimately put the industry out of business by automating the engagement process with journalists, mass producing content to game LLM algorithms, and delegating thinking to chatbots — rather than helping the next generation of communicators hone their writing, editing, fact checking, and critical thinking skills — that will be on us. Equally, if we don’t leverage AI, we will miss an opportunity. AI can help streamline workflows and its access to the vast body of knowledge on the internet can lead to smarter, more informed engagement with reporters and impactful content.
A key takeaway from conversations with AI startups is that they are now able to do things that were simply not possible two years ago. One is making the restaurant booking process more efficient, leading to greater longevity of the businesses they work with, which keeps staff employed. Another company’s voice technology is enabling local government to serve constituents at any time and in any language.
As with every other generational technology shift, some jobs will disappear, and others will be created. Communicators need to avoid both Panglossian optimism and the trap of seeing AI as the end of days. Finding the right use cases and effectively implementing the technology will be essential. The customer service line of a major financial institution declares, “We are using AI to deliver exceptional customer service,” only to require the customer to repeat the same basic information three times. This underscores the distance between AI’s potential and the imperfect experience most of us see every day.
Pragmatic agency and corporate communications leaders will continue to experiment and invest time in understanding what is now possible with AI. They will need to implement tools selectively, while carefully considering the impact of their decisions on the industry in the years to come.
At this stage, there is an element of the blind leading the blind with AI. Startups are not omniscient. Communicators looking at applications as a magic bullet are going to be sorely disappointed. We are already seeing questions about the returns on the gold rush into AI, significant gaps between the vision and the experience, and the dark side of the technology in areas such as rising fraud and malicious deepfakes. As I have written previously, AI is creating new problems to solve – and is a driving force behind new solutions, including content provenance authentication.
Just because you can do something doesn’t mean you should — at least not without careful consideration of use cases, consequences, and implementation. AI has enormous potential but also brings a whole new set of challenges and, potentially, existential risks. The idea that these two seemingly opposite things can both be true underscores the weight of responsibility we have to get this right.
Q: What is the main concern about AI in the communications industry?
A: The main concern is that AI could automate many tasks, potentially leading to job loss and a decline in the quality of communication. However, it also offers opportunities to streamline workflows and enhance content creation.
Q: How can communicators ensure they use AI responsibly?
A: Communicators should carefully consider the use cases for AI, focusing on how it can enhance, rather than replace, human skills. They should also be mindful of the ethical implications and potential risks.
Q: What are some positive applications of AI in communications?
A: AI can automate routine tasks, provide data-driven insights, and help in content creation and distribution. It can also improve customer service and engagement with journalists.
Q: What are the potential risks of AI in communications?
A: The risks include job displacement, decreased human interaction, and the potential for AI-generated content to be misleading or harmful. There is also the risk of over-reliance on AI, which can lead to a loss of critical thinking skills.
Q: How can the industry balance the benefits and risks of AI?
A: The industry can balance the benefits and risks by adopting a pragmatic approach, experimenting with AI tools, and carefully evaluating their impact. Continuous learning and ethical guidelines are crucial for responsible AI use.