Published Date: 13/10/2025
Responsible AI rests on a few foundational elements: a clear ethical framework, robust governance, and iterative development, backed by multi-skilled teams and realistic business outcomes, according to V2 AI founder Craig Howe. In an interview with ARN, Howe emphasized the challenges of integrating AI into business systems and the need for proper management of the associated risks. Having a strong foundation in place is crucial because, in a business setting, AI touches everything.
"We all know that confidence and trust in AI are paramount," he said. "I think the responsible AI framework deserves more attention – probably more than some organizations give it. Instead of just grabbing something off the internet, or even from within AI itself, businesses need a structured approach."
Howe explained that the first step is understanding why a business is undertaking digital transformation in the first place, and only then AI. "I commonly see that they're trying to simplify their business and organization," he said. "They're trying to drive efficiency, innovation, and growth. But you have to ask what this really means for teams. What does it mean for how businesses operate? What's emerging around the AI operating model is responsibility and accountability."
"In every decision, there are trade-offs. In common language, I think ethics, values – the moral compass – and responsible AI are key," explained Howe. "There's been a lot of development and talk about that. To me, it's almost like the operating manual for what needs to be done, aligned with the values and mission the organization already has. Obviously, governance comes in as the guardrails."
"Where things get compromised is probably early on, but this can be avoided if ethics, responsible AI frameworks, and some level of baseline governance are in place for teams and people in an organization to use. It's almost like the foundation of a house – the analogy everyone uses. It doesn't have to be completely perfect, at least on the governance side, but it has to be something you can continuously iterate on. You need that basis to get everyone on board to drive AI adoption and move that framework forward. I think that's probably the first place where things go wrong."
AI blends cloud, data, data science, software engineering, research, and operations practices, as well as business and operational strategy. "You have to look at all of those layers to get to the agentic AI layer," explained Howe. "For V2 AI, we look very strongly across four main pillars, including strategy and governance. We look at AI adoption to make sure that the aspirational strategy is met with the internal capability of the organization – that is, industrial AI."
Moving AI to production requires all of those underlying layers up the stack, overlaid with the right data and security to form the basis underneath. For Howe, AI is not a transactional tool, and it shouldn't be bolted on to existing processes. When it comes to the technology, earlier iterations required deep science and research knowledge. "For a lot of organizations, there's a lot more product-driven transformation taking place off the back of AI," Howe said. This means that if a business wants to use AI to get closer to customers and pursue simplification and revenue growth, it requires whole-of-business transformation, not just transacting the way it did with software or software-as-a-service.
If they want to reimagine processes and workflows and listen to customers, they probably have to lean more on product to make that happen, rather than bolting AI on. But the same operational concerns arise on the other side, and that's what makes it difficult. Howe said businesses still need to prevent, protect, and correct AI through governance. "Where misalignment happens is having to stop an agent eating bad data or parameters. If you're detecting hallucinations or drift in a model, correcting or retraining the AI, then it has to be re-anchored. You have to roll it back, and if it's an agent, it needs a really clear set of tasks that aren't ambiguous. In governance, you've always got those boundaries where you've got to make that catch."
According to Howe, there are five common pitfalls and failures V2 AI has observed when the foundations aren't in place. "They will keep happening if you don't employ a strategy of setting those metrics," he noted – for example, setting a measurable business outcome and working backwards, and having real use cases, tested within the business, that you want to run with and that help incrementally build capability along the way. Otherwise, you build an AI platform and nobody shows up to use it because they don't know how. It's the same thing that happened previously with software and cloud. Howe said one of the best ways to make sure the AI platform isn't under-utilized is to upskill teams. Otherwise, one element will fall short somewhere, along with the business case for the AI. You can't have a team without the capability, and you can't be without a solid platform or have your governance, risk, and compliance disconnected.
For consulting firm V2 AI, an AI value framework ties this all together, because it encompasses procurement, commercials, trust, and safety. This type of framework can start to provide more coordination around some of the nuances of providers such as OpenAI and Anthropic, whose large language models (LLMs) aren't necessarily built around traditional enterprise systems and processes. "For example, OpenAI is a research-led type of organization," he noted. When a business tries to mold ChatGPT into a service contract, service agreements, and commercialization, it doesn't always fit the way that business wants.
This is important because one of AI's greatest values lies in its capacity for scale and hyper-personalization. However, its content output tends to be generic and less genuine than human insight, which means human-created content remains central to trust. "If there's something that's written by an agent or AI versus a human, the human one is going to be far more valued," Howe said, "especially as there is information and attention overload. I think anything that's personalized to you, especially if it's human-to-human, is going to be deemed so much more valuable."
That's why getting the responsible AI framework, ethics, and governance in place is paramount. "You hear me talk a lot about governance, risk, and compliance, but putting that value framework in place really pulls out the essence of where this really sits," Howe said.
Q: What is the importance of a strong governance framework in AI integration?
A: A strong governance framework is crucial in AI integration because it ensures ethical use, manages risks, and aligns AI initiatives with the organization's values and mission.
Q: How can businesses avoid common pitfalls in AI implementation?
A: Businesses can avoid common pitfalls in AI implementation by setting measurable business outcomes, testing real use cases, and continuously iterating on their AI framework.
Q: Why is upskilling teams important in AI adoption?
A: Upskilling teams is important in AI adoption because it ensures that employees have the necessary capabilities to effectively use and manage AI technologies, preventing under-utilization and ensuring the success of AI initiatives.
Q: What are the key components of an AI value framework?
A: The key components of an AI value framework include procurement, commercials, trust, safety, and governance, which together ensure a coordinated and responsible approach to AI integration.
Q: How does V2 AI help businesses with AI adoption?
A: V2 AI helps businesses with AI adoption by providing strategic guidance, governance frameworks, and practical solutions to drive innovation, efficiency, and growth while ensuring ethical and responsible AI use.