An article structured as a Q&A with our Managing Partner, Louise Vella.
Q1: In what ways can a board of directors guide and govern the integration of AI within its company?
A: The board should lead AI implementation from the top, working closely with internal tech teams and external advisors to set governance, policy, and oversight standards. A key early step is establishing an “AI Acceptable Use Policy” that defines clear guidelines and boundaries to ensure responsible, ethical, and compliant use of AI by employees, contractors, and partners. Boards may need to adapt internal structures—such as audit or risk committees—to oversee AI effectively, ensuring alignment with strategic goals. They should also address ethical and risk factors like data privacy, bias, and reliability. Ultimately, the board’s role is to provide strategic oversight, ensure strong governance, and help the organisation navigate AI responsibly and effectively.
Q2: What steps should boards take to develop an effective governance framework that supports AI strategy implementation, and how can they educate themselves to make informed, responsible decisions in this area?
A: Boards should integrate AI into their agendas and ensure management identifies the necessary capabilities, particularly in human capital. This involves assessing current resources, addressing gaps, and securing the expertise needed for strategic decision-making. They should also evaluate the pace of AI adoption within their organisation relative to industry trends, potentially with support from external experts. With a clear understanding of the landscape, the board can develop a governance framework aligned with strategic goals—one that addresses risks, ethics, and investment priorities. AI adoption can be transformative, but it demands early, thoughtful engagement to ensure responsible implementation and long-term success.
Q3: How is AI—particularly in its current form—different from other technology or tech project decisions? What makes it uniquely impactful?
A: AI is different because it transforms how people work—automating tasks that once relied on human judgment. We're moving toward a hybrid model where AI enhances human capabilities by processing data faster and more accurately to support better decisions. As in the early days of the Internet, AI offers a major competitive edge to those who adopt it wisely. For boards, this isn't just another tech investment—it's a strategic shift. Organisations that fail to engage risk falling behind, much like companies that missed the digital revolution.
Q4: What does it mean for boards to use AI or AI-generated outputs in their work, and how can it improve information flow and decision-making?
A: AI is already embedded in everyday tools, but for boards, using generative AI means leveraging it to produce reports, insights, and data analysis. It can significantly improve the speed and depth of decision-making by processing information at scale. The key issue is trust—boards must understand how AI-generated outputs are created, what data is behind them, and where bias or inaccuracies—often called “hallucinations”—might exist. AI can be a powerful tool, but its outputs should be treated as inputs to informed judgment, not as unquestioned truth.
Q5: How can a board build enough trust in AI—both in how they use it and how their organisation employs it—to move forward confidently?
A: Trust in AI starts with a strong governance structure. Boards need assurance that the data feeding the AI is high-quality, the models are well-tested and regularly maintained, and that outputs are verified—ideally through human oversight or cross-validation by another system. Understanding the origin and reliability of the data is critical. Boards should ask: What model are we using? Was it trained on credible sources? How is it monitored? AI isn't one-size-fits-all—capabilities vary widely depending on access to data, compute power, and training. A board must evaluate whether their organisation’s AI tools are appropriate, secure, and aligned with business goals. Ultimately, trust comes from transparency and accountability. If the board knows how the AI works, what it’s drawing from, and how its outputs are checked, it can make more confident, informed decisions.
Q6: How can boards strike the right balance between moving quickly on AI implementation to stay competitive, while avoiding costly missteps or false starts? What steps should they take to ensure responsible and effective adoption?
A: A disciplined, phased approach is essential for successful AI adoption. Boards should advocate for compartmentalised implementation, rolling out AI in targeted functions where outcomes can be clearly measured and return on investment (ROI) is transparent. This minimises the risk of broad, unfocused deployments that often lead to high costs and limited impact. Crucially, success depends not only on the technology itself but also on how the organisation navigates change. Effective change management is vital to ensure that employees are engaged, supported, and aligned throughout the transition. Boards and leadership must set clear expectations, positioning AI as a tool to augment and empower the workforce—not replace it. This requires transparent communication, training, and leadership support to build trust and readiness across the organisation. By establishing clear metrics, fostering a culture of adaptability, and maintaining ongoing communication, organisations can scale AI responsibly. A deliberate, step-by-step rollout—anchored in both strategic planning and strong change management—enables organisations to minimise risk, sustain workforce morale, and maintain competitive momentum.
Q7: How should boards and leadership determine the ROI of AI initiatives?
A: Measuring AI ROI starts with understanding who’s using it and how it's impacting workflows. Initially, ROI may appear negative: the AI may not be well-matched to the task, and building the right workflow takes time and resources. However, once effective workflows are established, returns can scale quickly. AI can automate tasks efficiently, but early-stage gains are often granular and incremental. Over time, with the right structure—potentially through dedicated AI teams—organisations can see significant returns. Some may even commercialise successful AI tools, turning internal investments into industry solutions. But for most, it’s a step-by-step process requiring careful testing, refinement, and patience.
Q8: What are your thoughts on boards using AI tools directly—for example, to transcribe meeting minutes or summarise board materials?
A: Boards need to approach the use of AI tools with caution, particularly when it comes to handling confidential or sensitive information. Cloud-based AI platforms may process or store data on external servers, raising concerns about data privacy, unintentional leaks, and cross-border data transfers. If a lawyer is present at a board meeting to provide confidential advice, having AI record the discussion could jeopardise attorney-client privilege, as sensitive information might be transmitted to third parties. Therefore, it’s crucial to ensure that any data shared within the boardroom remains fully secure and private, with no risk of unauthorised disclosure, especially given the legal responsibilities of directors. From a compliance standpoint, regulations like the EU’s GDPR and the upcoming EU AI Act impose strict requirements on how personal and high-risk data is managed. Without proper safeguards, the use of AI in board settings can pose significant legal, regulatory, and reputational risks—making due diligence and data governance absolutely critical.
About the Author
Louise is a seasoned professional with over 25 years of experience in the Corporate Services Provider (CSP) industry. She has supported a wide spectrum of clients—including those from the corporate, private, and public sectors, as well as entrepreneurs—across company secretarial work, corporate governance, and trustee and directorship services.