Published Date: 07/06/2025
Generative artificial intelligence is not a distant horizon issue or a back-office experiment. It's here, permeating the daily operations of global businesses, reshaping legal workflows, and subtly but unmistakably altering the role of the modern general counsel (GC).
As AI systems begin to inform decision-making, accelerate analysis, and influence outcomes at scale, the legal function sits at a new intersection of innovation and control, enablement and accountability. Corporate legal department leaders and law firm attorneys alike face a growing expectation to drive legal and ethical oversight at pace, contribute to enterprise value creation, and act as board-facing strategists for AI-related governance, liability, and reputational integrity.
Despite the rapid pace of innovation, most legal teams are still in the early stages of adopting generative AI. This disconnect between expectation and reality was evident in the 2025 General Counsel Report, with more than two-thirds of GCs expressing interest in using generative AI but only 15% feeling prepared to manage its risks. So, how can GCs bridge this gap? How can they move from interest to implementation? This is both a significant challenge and a profound opportunity, and what GCs do next could define how AI is integrated, trusted, and scaled across the business.
AI: The Disruptor, Enabler, or Both?
AI's potential for disruption is undeniable. While it's hard to predict exactly how AI will transform legal work, being future-ready isn't just about technical skills; it's about asking better questions, building multidisciplinary, adaptable teams, challenging assumptions, and staying open in the face of uncertainty.
With expert oversight, generative AI can support investigations and crisis responses, including accelerating anti-money laundering investigations and sanctions reviews, enabling IPO preparation, and stress-testing litigation strategies by rapidly surfacing weaknesses in complex datasets. Yet many legal departments still grapple with how to move beyond the starting line.
Mind the Adoption Gap
In a profession defined by precision, where ambiguity is risk and experimentation must be closely monitored, legal teams must take a responsible approach to advanced forms of AI. This is exactly why leadership matters. The GC can set the tone, establishing parameters that allow teams to experiment safely, share learnings openly, and treat early missteps not as failures but as part of a critical learning curve.
Moreover, the “what” and the “why” must be understood and kept at the center of decisions. Is the organization clear on what solutions are being deployed and why AI is the right fit? Technology adoption shouldn’t be change for change’s sake. It needs to be grounded in a specific purpose, with key performance indicators that quantify the impact and a risk framework that evolves alongside innovation, changing laws, stakeholder expectations, and risk exposure.
Whose Job Is It Anyway?
A common refrain among legal departments is: “Who owns AI?” The answer? It depends. In some sectors, the chief technology officer or chief operating officer leads the charge; in others, the risk function does. Roles such as chief AI officers, who focus on AI implementation, and chief data officers, who focus on data governance, are also increasingly involved. The leadership and oversight model will vary with the organization, its industry, the intended use cases, the type of AI being used, and the sensitivity of the data feeding the models.
Leadership often shifts depending on where AI generates the most value, risk, or safety impact. Many of FTI Consulting’s GC clients are taking the lead in forming cross-disciplinary AI committees, partnering with product, engineering, compliance, and security teams as well as external specialists such as legal advisors, AI ethicists, privacy and cyber forensics experts, and bias auditors. Their role isn’t to control every detail but to ensure the right questions are asked, early and often.
Governance: Light Touch or Mission Critical?
AI governance isn’t a new regulatory frontier; it’s a convergence of old duties in a new form. As adoption grows, so does scrutiny from regulators, clients, boards, and the public. While AI use cases vary widely, a well-designed governance framework can flex across industries if it’s grounded in core principles like accountability, transparency, fairness, and human oversight. The key is to align the governance approach with the level of risk, complexity, and impact. GCs don’t need a one-size-fits-all model; rather, they need a strategic framework that adapts with purpose.
Asking the right questions early helps strike the right balance between innovation and risk mitigation. These include:
- What are the consequences if this system gets it wrong?
- What data is the AI using, and is any of it privileged, protected, or confidential?
- Could the AI’s use inadvertently undermine or waive legal privilege?
- Who owns model oversight, and how is lifecycle risk managed?
- What happens when something goes wrong, and who is accountable?
- Can the system’s process for decision-making be explained and justified?
AI risks are manageable, but they demand more than controls. They require leadership. GCs are uniquely placed to embed AI into their organizations with both ambition and accountability, and to treat it as a domain to shape.
Looking Ahead
The legal field is at an inflection point, not just of capability but also of accountability. As regulators accelerate AI-specific rules and liability frameworks, GCs have the opportunity to serve as early architects of responsible innovation. Modern GCs will recognize that AI isn’t an add-on to business strategy; it is core to how businesses operate, compete, and create value. It is simultaneously a compliance imperative, a competitive necessity, and a reputational risk that demands board attention. GCs who can balance these three dimensions won’t just mitigate risk; they’ll help build resilient, future-ready organizations.
Q: What is the role of the General Counsel in AI adoption?
A: The General Counsel (GC) plays a crucial role in setting the tone for AI adoption, ensuring responsible experimentation, and aligning AI solutions with the organization's strategic goals and risk management frameworks.
Q: How can legal teams benefit from generative AI?
A: Generative AI can support legal teams in various ways, such as accelerating anti-money laundering investigations, enabling IPO preparation, and stress-testing litigation strategies by rapidly surfacing weaknesses in complex datasets.
Q: What are the key challenges in adopting AI for legal functions?
A: Key challenges include aligning AI solutions with specific business needs, managing risks, ensuring ethical use, and navigating regulatory scrutiny.
Q: Who should lead AI initiatives in an organization?
A: Leadership of AI initiatives varies depending on the organization, its industry, and the intended use cases, and often involves collaboration among the GC, chief technology officer, chief operating officer, and other relevant functions.
Q: What are the core principles of AI governance?
A: The core principles of AI governance include accountability, transparency, fairness, and human oversight. These principles help ensure that AI is used responsibly and aligned with the organization's strategic goals and risk management frameworks.