Author: robgerbrandt

  • AI Adoption as a Mirror: What Your Organization’s AI Strategy Reveals About Its Culture

    The artificial intelligence revolution sweeping through enterprise corridors often feels like a technology narrative—one centered on models, algorithms, and computational prowess. Yet the real story of AI adoption is far more human. How an organization approaches artificial intelligence implementation reveals far more about its fundamental character than any corporate mandate or technology roadmap ever could. AI adoption is not merely a technical choice; it is a cultural artifact that exposes the organization’s deepest values, leadership philosophy, and capacity for change. In other words, culture is king.

Consider the striking data from recent organizational research: only 17% of organizations have achieved leadership-driven AI adoption with clear strategies and policies, while a full 31% have no formal AI adoption strategy whatsoever. This fragmentation is not a technology failure. It reflects underlying cultural realities—whether an organization genuinely prioritizes strategic alignment, whether it trusts its workforce, and whether it has built the institutional muscle for deliberate transformation. Two companies with identical AI budgets and identical talent may produce radically different outcomes based on the cultural substrate in which they plant their technological seeds.[1]

    The Alignment Imperative: Strategy as Cultural Statement

When leadership establishes a coherent AI strategy with clear goals and transparent communication, something profound happens. Organizations with structured AI adoption report 62% of employees as fully engaged—a figure far higher than in organizations with haphazard approaches. This is not incidental. A well-articulated AI strategy telegraphs something essential about organizational culture: that leadership thinks systematically, communicates transparently, and believes employees deserve clarity about institutional direction.[1]

    Conversely, organizations that allow AI adoption to unfold chaotically—with 21% of employees independently experimenting without guidance—inadvertently reveal a culture characterized by ambiguity, fragmented decision-making, and perhaps most troublingly, limited trust in centralized leadership. The absence of formal strategy is not neutrality; it is a cultural statement about organizational values and priorities.

    The research here is unambiguous. Organizations with leadership-driven AI strategies are 7.9 times more likely to believe AI has positively impacted workplace culture compared to those without formal approaches. Critically, employees in these structured environments are 1.2 times more likely to report that their teams work well together. Strategy, then, functions as a cultural artifact—a mechanism through which organizations signal whether they believe in purposeful direction, collective alignment, and the power of coordinated action. In this sense, a mature AI strategy is as much a statement about who you are as it is about what technology you will deploy.[1]

    Trust as the Cornerstone of Technological Integration

    Perhaps no single factor predicts AI adoption success more reliably than organizational trust. Research from Great Place to Work reveals that organizations with high employee trust experience 8.5 times higher revenue per employee and 3.5 times stronger market performance. Yet trust does not emerge from technology budgets. It emerges from leadership behavior, transparency, and the cultural foundation leaders have spent years constructing.[2]

When employees encounter AI without trust-building infrastructure, they interpret the technology through a lens of anxiety. A Wiley-published study examining employee trust configurations identified four distinct patterns: full trust (high cognitive and emotional trust), full distrust, uncomfortable trust (high cognitive but low emotional trust), and blind trust. The research revealed that these configurations trigger different behaviors—some employees share their digital footprints openly, while others engage in data manipulation, confinement, or withdrawal. These responses create what researchers termed a “vicious cycle” in which degraded data inputs undermine AI performance, further eroding trust.[3]

    This cycle is rooted in organizational culture. In low-trust environments, AI adoption becomes a threat rather than an opportunity. Employees fear job displacement, question motives, and withhold engagement. In contrast, organizations that have cultivated genuine trust relationships experience what might be called “positive reciprocity”—employees extend benefit of the doubt, engage openly, and contribute their best thinking to AI initiatives. Trust, therefore, is not a nice-to-have ancillary to AI adoption. It is the cultural prerequisite that determines whether an organization’s AI investments generate value or waste.

    Adaptability: The Cultural Dimension That Determines Success

    One of the most revealing aspects of an organization’s culture is its relationship to change. Organizational research identifies adaptability as the single most important cultural dimension for predicting AI adoption success. Organizations that demonstrate flexibility, comfort with ambiguity, and willingness to experiment tend to integrate AI successfully. Those that prize control, stability, and predictability struggle.[4]

    This is precisely where culture functions as a mirror. An organization’s capacity for adaptability reflects decades of accumulated decisions about how leaders have responded to disruption, whether employees have been encouraged to voice concerns, and whether failure has been treated as a learning opportunity or a career liability. Rigid, control-oriented cultures typically cannot mobilize the psychological flexibility required for AI adoption because that flexibility was never culturally embedded in the first place.

Organizations that invest substantially in change management recognize this reality implicitly. Research demonstrates that organizations investing in structured change management approaches are 1.6 times more likely to report that AI initiatives exceed expectations, and more than 1.5 times as likely to achieve desired outcomes. This statistical relationship reflects a deeper cultural stance: the organization is signaling that it values deliberate transition management, employee support systems, and human-centered implementation. The change management investment is not about the technology; it is about whether leadership has the cultural consciousness to understand that transformation is fundamentally a human challenge.[5]

    Leadership Visibility as Cultural Signal

    How leaders personally engage with AI reveals the organization’s authentic cultural values. In organizations where executives visibly use AI tools, model experimentation, and discuss the technology openly, a message cascades through the organization: innovation is not peripheral; it is central to how we work. When leaders remain distant from actual AI engagement—delegating implementation entirely to technical teams—they communicate implicitly that AI is a specialist concern, not an organizational imperative.

    Research on AI-first leadership from Harvard Business School identifies a critical responsibility: leaders must bridge the gap between technological capabilities and strategic goals, foster cultures that embrace AI’s potential to complement human creativity, and demonstrate that they themselves understand and value the technology. This leadership visibility is not theater. It is a fundamental cultural signal about whether an organization’s values align with technological transformation or whether that transformation is being tolerated rather than embraced.[6]

    The Culture-Skills-Trust Triangle

    Successful AI adoption rests on three pillars, all of which are fundamentally cultural in nature. First, organizations must develop clear strategic communication about AI’s role and purpose. Second, they must invest substantially in skills development and ongoing learning. Third, they must proactively address trust, security, and ethical concerns with transparency and governance frameworks. Each of these pillars reflects cultural commitments: to clarity over ambiguity, to employee development over static competence requirements, and to ethical integrity over expedient corner-cutting.[7]

Organizations that excel in all three dimensions typically share a distinctive cultural profile: they are transparent about challenges, they invest in people as their most important asset, and they view ethical considerations as non-negotiable strategic factors rather than compliance burdens. In contrast, organizations that struggle typically demonstrate cultural patterns of opacity, underinvestment in human development, and a tendency to treat ethics as an afterthought.

    The Uncomfortable Truth: Fear as a Cultural Diagnostic

Interestingly, research reveals that high-achieving organizations report more than twice as much AI-related fear as low-achieving organizations. This counterintuitive finding offers profound insight into organizational culture. High-achieving organizations express fear because they have ambitious AI visions and understand the genuine stakes involved. But critically, these organizations pair that fear with two cultural characteristics: they express little desire to reduce headcount through automation, and they invest substantially in training and change management. Their fear becomes a catalyst for responsible action rather than a justification for avoidance.[5]

    Organizations that express minimal AI-related fear often demonstrate a more troubling cultural pattern: either they lack strategic ambition (and therefore have little to fear), or they have adopted a posture of denial about genuine risks and disruptions. In this sense, measured concern about AI is actually a cultural strength—a signal of organizational maturity and realistic assessment.

    Conclusion: What Your AI Strategy Says About You

    An organization’s approach to artificial intelligence adoption ultimately functions as a cultural X-ray. It reveals whether leadership thinks systematically or reactively, whether trust has been built or eroded, whether the organization values adaptability or prizes control, and whether employee development is treated as an investment or an obligation.

    The most successful organizations approach AI not as a technology problem but as a cultural challenge. They recognize that implementation success depends on transparent strategy, leadership visibility, change management infrastructure, trust-building mechanisms, and systems that empower employees while maintaining ethical governance. These organizations do not adopt AI despite their culture; they adopt AI because their culture makes adoption possible.

    The inverse is equally true. Organizations that struggle with AI adoption rarely suffer from technical limitations. They suffer from cultural constraints—fragmented decision-making, low trust, rigid hierarchies, limited communication, and underinvestment in people. In these organizations, AI becomes another marker of the deeper dysfunction rather than a catalyst for transformation.

    As you evaluate your organization’s AI adoption journey, resist the temptation to focus exclusively on technology decisions. Instead, examine the cultural fingerprints your choices reveal. What does your AI strategy say about how you value transparency and clarity? What does your change management investment reveal about whether you genuinely trust and support employees? What does your leadership’s personal engagement with AI technology communicate about whether transformation is authentic or performative? The answers to these questions will predict your AI success far more reliably than any technology selection ever could. Your organization’s relationship with AI is simply a more legible version of who you already are.

    Sources

    [1] AI’s Cultural Impact: New Data Reveals Leadership Makes … https://blog.perceptyx.com/ais-cultural-impact-new-data-reveals-leadership-makes-the-difference

    [2] The Human Side of AI: Balancing Adoption with Employee … https://www.greatplacetowork.ca/en/articles/the-human-side-of-ai-balancing-adoption-with-employee-trust

    [3] How employee trust in AI drives performance and adoption https://newsroom.wiley.com/press-releases/press-release-details/2025/How-employee-trust-in-AI-drives-performance-and-adoption/default.aspx

    [4] How Organizational Culture Shapes AI Adoption and … https://www.shrm.org/topics-tools/flagships/ai-hi/how-organizational-culture-shapes-ai-adoption-success

    [5] AI transformation and culture shifts https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/articles/build-ai-ready-culture.html

    [6] AI-First Leadership: Embracing the Future of Work https://www.harvardbusiness.org/insight/ai-first-leadership-embracing-the-future-of-work/

    [7] AI Adoption: Driving Change With a People-First Approach https://www.prosci.com/blog/ai-adoption

    [8] Post #5: Reimagining AI Ethics, Moving Beyond Principles to … https://www.ethics.harvard.edu/blog/post-5-reimagining-ai-ethics-moving-beyond-principles-organizational-values

    [9] AI Strategy & Culture: Driving Successful AI Transformation https://www.mhp.com/en/insights/blog/post/ai-strategy-and-culture

    [10] Beyond the Model: Unlocking True Organizational Value … https://www.transformlabs.com/blog/beyond-the-model-unlocking-true-organizational-value-from-ai

    [11] How AI is Reshaping Company Culture and Values https://cerkl.com/blog/ai-in-company-culture/

    [12] AI in the workplace: A report for 2025 https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

    [13] The Role of Artificial Intelligence in Digital Transformation https://online.hbs.edu/blog/post/ai-digital-transformation

    [14] The Impact Of AI On Company Culture And How To … https://www.forbes.com/sites/larryenglish/2023/05/25/the-impact-of-ai-on-company-culture-and-how-to-prepare-now/

    [15] The Role of Leadership in Driving AI Implementation https://ewfinternational.com/the-role-of-leadership-in-driving-ai-implementation/

    [16] What is the Role of Culture in AI Adoption Success? https://www.thehrobserver.com/technology/what-is-the-role-of-culture-in-ai-adoption-success/

    [17] AI AND ORGANIZATIONAL CULTURE https://www.gapinterdisciplinarities.org/res/articles/(136-140)-AI-AND-ORGANIZATIONAL-CULTURE-NAVIGATING-THE-INTERSECTION-OF-TECHNOLOGY-AND-HUMAN-VALUES-20250705150542.pdf

    [18] 8 Ways Leaders Can Help Organizations Unlock AI https://www.iiba.org/business-analysis-blogs/8-ways-leaders-can-help-organizations-unlock-ai/

    [19] The Role of Organizational Culture Under Disruption Severity https://ieomsociety.org/proceedings/bangladesh2024/219.pdf

  • The Importance of Information Governance in ISO 27001 Certification

    In today’s digital landscape, organizations face increasing pressure to protect sensitive information, comply with regulatory requirements, and maintain stakeholder trust. ISO/IEC 27001, the international standard for information security management systems (ISMS), provides a structured framework for managing and securing information assets. However, successful implementation and certification of ISO 27001 depend heavily on a foundational discipline: Information Governance (IG). This essay explores the critical role of Information Governance in ISO 27001 certification, highlighting its influence on risk management, compliance, accountability, and organizational resilience.

    Understanding Information Governance

    Information Governance refers to the strategic framework and set of policies, procedures, and controls that ensure effective management of information throughout its lifecycle. It encompasses data quality, privacy, security, retention, and compliance, aligning information practices with business objectives and legal obligations. Unlike traditional IT governance, IG is cross-functional, involving stakeholders from legal, compliance, records management, IT, and business units.

    ISO 27001 Overview

    ISO 27001 is a globally recognized standard that specifies the requirements for establishing, implementing, maintaining, and continually improving an ISMS. Its core objective is to protect the confidentiality, integrity, and availability of information by applying a risk management process. The standard includes clauses related to leadership, planning, support, operation, performance evaluation, and improvement, along with Annex A controls covering areas such as access control, cryptography, physical security, and incident management.

    The Intersection of IG and ISO 27001

    While ISO 27001 provides the framework for securing information, Information Governance ensures that the information being protected is accurate, relevant, and managed in accordance with legal and business requirements. The synergy between IG and ISO 27001 is essential for several reasons:

    1. Establishing Clear Ownership and Accountability

    Information Governance defines roles and responsibilities for data stewardship, ownership, and custodianship. This clarity is crucial for ISO 27001, which requires documented responsibilities for information security. Without IG, organizations may struggle to identify who is accountable for specific data sets, leading to gaps in security controls and audit trails.

    2. Enhancing Risk Management

    Effective IG provides visibility into the types of information held, their value, and associated risks. This insight is vital for ISO 27001’s risk assessment and treatment processes. By categorizing data based on sensitivity and criticality, organizations can prioritize security controls and allocate resources efficiently. IG also supports the identification of legal and regulatory risks, which must be addressed in the ISMS.

    3. Supporting Compliance and Legal Requirements

    ISO 27001 requires organizations to consider legal, regulatory, and contractual obligations related to information security. Information Governance ensures that data handling practices comply with laws such as GDPR, HIPAA, and industry-specific regulations. It facilitates the creation of policies for data retention, disposal, and breach notification, which are essential for both compliance and certification.

    4. ISO 27001 Data Retention Policies and Data Disposition

A critical aspect of Information Governance within ISO 27001 is the management of data retention and disposition. Annex A control A.8.3 of ISO 27001 specifically addresses the handling of information during its lifecycle, including secure disposal when no longer needed.

    • Data Retention Policies: These policies define how long different types of data should be retained based on legal, regulatory, and business requirements. Information Governance ensures that retention schedules are documented, justified, and consistently applied. Retaining data longer than necessary increases risk and cost, while premature deletion can lead to compliance violations or loss of valuable information.
    • Data Disposition: Secure and verifiable disposal of data is essential to prevent unauthorized access or data breaches. ISO 27001 requires organizations to implement controls that ensure data is destroyed in a manner that renders it unrecoverable. IG supports this by establishing procedures for data sanitization, physical destruction of media, and audit trails to verify compliance.

    Together, these practices help organizations reduce data sprawl, minimize exposure to risk, and demonstrate due diligence during audits. They also align with broader privacy principles such as data minimization and purpose limitation.
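
    To make this concrete, the sketch below shows how a retention schedule and disposition rules might be expressed as a simple, auditable structure. The categories, retention periods, and disposal methods are illustrative assumptions, not requirements drawn from ISO 27001 or any specific regulation; real schedules must be grounded in the organization’s documented legal and business obligations.

    ```python
    from dataclasses import dataclass
    from datetime import date, timedelta

    # Illustrative retention rules; actual periods must come from legal,
    # regulatory, and business requirements.
    RETENTION_SCHEDULE = {
        "financial_records": {"retain_years": 7, "disposal": "secure_delete"},
        "marketing_analytics": {"retain_years": 2, "disposal": "anonymize"},
        "backup_media": {"retain_years": 1, "disposal": "physical_destruction"},
    }

    @dataclass
    class InformationAsset:
        name: str
        category: str  # must map to a key in RETENTION_SCHEDULE
        created: date

        def is_due_for_disposition(self, today: date) -> bool:
            """True once the asset has exceeded its documented retention period."""
            rule = RETENTION_SCHEDULE[self.category]
            expiry = self.created + timedelta(days=365 * rule["retain_years"])
            return today >= expiry

    # Flag assets that should enter the audited disposal workflow.
    assets = [
        InformationAsset("2016 payroll ledger", "financial_records", date(2016, 4, 1)),
        InformationAsset("Campaign clickstream", "marketing_analytics", date(2024, 6, 15)),
    ]
    for asset in assets:
        if asset.is_due_for_disposition(date.today()):
            method = RETENTION_SCHEDULE[asset.category]["disposal"]
            print(f"{asset.name}: schedule for {method}")
    ```

    Even a spreadsheet with the same columns serves the purpose; what matters is that retention and disposition decisions are documented, justified, and verifiable during an audit.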

    5. Improving Data Quality and Integrity

    Poor data quality undermines the effectiveness of security controls and decision-making. IG promotes data accuracy, consistency, and completeness, which are critical for ISO 27001’s control objectives. For example, access control policies depend on reliable user and asset information. IG also supports audit readiness by ensuring that records are complete and traceable.

    6. Facilitating Documentation and Evidence

    ISO 27001 certification requires extensive documentation, including policies, procedures, risk assessments, and control implementation records. Information Governance provides the structure for managing documentation, version control, and retention schedules. It ensures that evidence required for audits is readily available and trustworthy.

    7. Driving Cultural Change and Awareness

    Information Governance fosters a culture of accountability and ethical information use. This cultural shift complements ISO 27001’s emphasis on leadership and awareness. Training programs, communication strategies, and performance metrics developed under IG can be leveraged to promote security awareness and employee engagement in the ISMS.

    8. Enabling Continuous Improvement

    Both IG and ISO 27001 advocate for continuous improvement. IG provides mechanisms for monitoring data usage, policy compliance, and emerging risks. These insights feed into ISO 27001’s performance evaluation and improvement processes, enabling organizations to adapt to changing threats and business needs.

    Practical Steps to Integrate IG into ISO 27001 Projects

    To maximize the benefits of Information Governance during ISO 27001 certification, organizations should consider the following steps:

    • Conduct an Information Inventory: Identify and classify all information assets, including structured and unstructured data, to understand what needs protection.
    • Define Governance Policies: Establish policies for data ownership, access, retention, and disposal aligned with legal and business requirements.
    • Engage Stakeholders: Involve cross-functional teams in governance and security planning to ensure comprehensive coverage and buy-in.
    • Implement Data Lifecycle Management: Manage information from creation to disposal, ensuring security controls are applied at each stage.
    • Monitor and Audit: Use IG tools to track data usage, policy compliance, and anomalies, feeding insights into the ISMS.
    • Align Metrics and KPIs: Develop performance indicators that reflect both governance and security objectives, supporting continuous improvement.
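
    As an illustration of the first step, here is a minimal sketch of an information inventory entry with a sensitivity classification. The field names and sensitivity tiers are assumptions for demonstration; each organization should substitute its own classification scheme and legal mappings.

    ```python
    from dataclasses import dataclass, field

    # Illustrative sensitivity tiers; substitute the organization's own
    # data classification policy.
    SENSITIVITY_LEVELS = ("public", "internal", "confidential", "restricted")

    @dataclass
    class InventoryEntry:
        asset: str
        owner: str
        location: str
        sensitivity: str
        regulations: list = field(default_factory=list)

        def __post_init__(self):
            if self.sensitivity not in SENSITIVITY_LEVELS:
                raise ValueError(f"Unknown sensitivity: {self.sensitivity}")

    inventory = [
        InventoryEntry("Customer CRM database", "Sales Ops", "SaaS CRM", "confidential", ["GDPR"]),
        InventoryEntry("Public product brochures", "Marketing", "Website CMS", "public"),
        InventoryEntry("Employee health claims", "HR", "HRIS", "restricted", ["HIPAA"]),
    ]

    # A simple view for the ISMS risk assessment: highest-sensitivity assets first.
    for entry in sorted(inventory, key=lambda e: SENSITIVITY_LEVELS.index(e.sensitivity), reverse=True):
        print(entry.sensitivity.upper(), "-", entry.asset, "-", ", ".join(entry.regulations) or "none")
    ```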

    Challenges and Considerations

    Integrating Information Governance into ISO 27001 projects is not without challenges. Organizations may face resistance to change, lack of resources, or fragmented data environments. Overcoming these hurdles requires strong leadership, clear communication, and a phased implementation approach. Leveraging frameworks such as COBIT, ITIL, and NIST can also support IG maturity and alignment with ISO 27001.

Conclusion

    Information Governance is not merely a supporting function in ISO 27001 certification—it is a strategic enabler. By ensuring that information is well-managed, compliant, and aligned with business goals, IG lays the foundation for a robust and effective ISMS. Organizations that embrace IG as part of their ISO 27001 journey are better equipped to manage risks, demonstrate compliance, and build trust with stakeholders. In an era where information is both an asset and a liability, integrating governance and security is not just best practice—it is essential.

  • Information Governance in 2025: Board-Level Oversight of Cybersecurity, Artificial Intelligence, Privacy, and Risk Management


    Executive Summary

    As 2025 unfolds, Boards of Directors find themselves at the epicenter of an unprecedented convergence of digital innovation, regulatory challenge, and emergent risk. The stakes have never been higher, nor the expectations more complex. From the relentless pace of cyber threats powered by artificial intelligence, to the ethical and regulatory labyrinth of AI deployment, and the rapidly expanding universe of privacy compliance and information governance, boards are being called to exercise a level of vigilance and strategic leadership rarely demanded in prior decades.

This report provides a deep analysis of the evolving best practices and emerging concerns in information governance that demand board-level attention. Structured around the four cornerstone themes – Cybersecurity, Artificial Intelligence, Privacy, and Risk Management – it explores not only the foundational responsibilities but also the nuanced ways in which modern board and committee oversight is evolving to match a volatile and hyper-connected environment.

    A central takeaway is that effective information governance is no longer a matter of compliance or technology alone. Rather, it is a strategic differentiator, a significant driver of competitive advantage, and a critical measure of ESG performance and reputational resilience. This report draws on extensive recent data and trends, recommendations from leading consultancy and governance organizations, and lessons from regulatory and litigation developments across multiple jurisdictions.


    Table: Board-Level Information Governance – Key Themes, Responsibilities, and Concerns (2025)

Theme | Key Board Responsibilities | Primary Concerns / Strategic Priorities | Common Oversight Structures / Notes
    Cybersecurity | Strategic risk oversight; CISO-board relationship; incident response and scenario planning; tech acumen | Evolving threat landscape, regulatory compliance, third-party exposure, ESG impact | Tech/Risk committees, subcommittees, full board
    AI | AI governance charters; ethics/bias oversight; scenario planning; board education and expertise | Ethical risk, regulatory lag, ROI, innovation vs. controls, stakeholder trust | Audit, risk, technology/AI, ESG, full board
    Privacy | Privacy-by-design oversight; program quality; compliance posture; evidence of accountability | Regulatory change, consumer trust, cross-border frameworks, litigation risk | Risk, compliance, audit, tech/digital, full board
    Risk Mgmt | Dynamic, multidisciplinary committee evolution; cross-functional reporting; crisis readiness | New interlinkages (cyber, privacy, ESG), third-party risk, emerging risks | Standing risk/audit/tech committees, new hybrids

    Table Elaboration

This summary table distills the sprawling imperatives facing boards in 2025, but the true import of each cell comes alive only in the detailed analysis that follows. For instance, the primary concern of regulatory compliance referenced for all four areas is no longer static: regulations covering AI and privacy are in a dynamic state of flux in the US, Canada, UK, EU, and throughout Asia-Pacific, demanding that boards move beyond reactive compliance to proactive oversight and scenario-based risk planning. Similarly, competitive advantage through cybersecurity, long considered an IT ambition, is now a C-suite and board KPI strongly linked to trust, revenue, and ESG performance.


    Cybersecurity: From Technical Silo to Board-Level Strategic Differentiator

    The Shifting Landscape of Cyber Risk

In 2025, the volume and sophistication of cyber threats continue to soar, fueled by nation-state actors, criminal syndicates, and the democratization of attacker tools via generative AI. Cyber risk is now routinely listed as the top threat by global board members, executives, and risk practitioners across industries – from financial services to manufacturing, energy, retail, and healthcare. The Allianz Risk Barometer again ranked cyber incidents as the leading global business risk for the year, with executive surveys echoing these concerns.

    Cyber risk is not static: new attack vectors – such as supply chain attacks, ransomware-as-a-service, deepfake-driven phishing, and attacks on AI models – demand that boards move far beyond mere technical literacy or delegated oversight. Digital resilience, encompassing both defense and recovery capabilities, is now fundamental to business continuity and valuation protection.

Board–CISO Relationship and Dynamics

    The relationship between the Chief Information Security Officer (CISO) and the board has become a defining factor in long-term cyber resilience. Leading practice requires the CISO to be empowered as a business enabler, regularly briefing the board not just on technical controls and incident counts, but on business risk metrics, threat intelligence, scenario planning, and strategic investment priorities.

    Boards are also expected to bridge the typical technical-business gap, demanding CISOs present data and stories in terms directors understand: risk to operations, financial exposure, and compliance with board-defined risk appetite. Open, recurring engagement (rather than sporadic, compliance-driven reporting) is cited as a keystone of mature board–CISO partnerships.

    Boards are increasingly including cybersecurity skills in director selection matrices. Yet even boards without such expertise must ensure regular education sessions and access to external advisors to bridge technical knowledge gaps and oversee cybersecurity in strategic, enterprise terms.

    Cybersecurity as Competitive Advantage and ESG Consideration

    A central trend is the recognition of cybersecurity as a direct competitive advantage and ESG (Environmental, Social, Governance) pillar. Boards are expected to demonstrate how investments in cybersecurity not only protect against loss, but also enable trust, drive resilience, and support sustainable business models. Cyber resilience has become an explicit expectation for investors and rating agencies when evaluating a company’s long-term prospects.

    Third-Party and Supply Chain Risk

    Boards now oversee third-party and supply chain cyber risk as a central concern, given the cascade effect of high-profile breaches (e.g., SolarWinds, NotPetya) and regulatory focus on operational resilience. Effective oversight requires boards to confirm that management maintains a risk-ranked inventory of critical third-party relationships, enforces rigorous due diligence, and applies continuous monitoring and incident response playbooks that extend beyond the organization’s boundaries.

    Incident Response and Scenario Planning

    Crisis readiness is a non-negotiable board obligation. Modern boards demand that incident response plans are documented, tested through tabletop simulations, and mapped to the organization’s risk appetite. Active scenario planning – envisioning not only likely attacks but unthinkable “black swan” events – enables directors to test assumptions, clarify roles, and build organizational muscle for rapid, values-aligned response in a crisis.

    Summary of Board-Level Cybersecurity Best Practices

    • Regular, direct engagement between board and CISO, with mutual understanding of business risk
    • Technology and cyber expertise included in board composition, or accessible via advisors/committees
    • Cybersecurity considered in ESG frameworks, reporting, and board KPIs
    • Supply chain and third-party cyber risk integrated into enterprise TPRM programs, with board visibility
    • Incident response and crisis scenario planning as standing board agenda items, with periodic simulations
    • Cyber risk issues integrated into strategy discussions – not sidelined as technical/IT topics
    • Independent external maturity assessments and regular benchmarking against industry peers
    • Board oversight of breach notification, ransom response, and public/stakeholder communications.

    Artificial Intelligence (AI): Board Challenges and Strategic Governance in the Age of Intelligence

    AI Oversight Moves Center Stage

    AI governance has emerged in 2025 not just as a compliance challenge but as a board-level strategic dilemma. The proliferation and mainstreaming of generative AI, machine learning, and automation tools across nearly every business function have confronted directors with the need to govern in a domain characterized by rapid innovation, regulatory uncertainty, and substantial risk of operational, legal, and reputational harm.

    Recent survey data indicate remarkable growth in board commitment: as of 2024, over 31% of S&P 500 companies disclosed some level of board or committee oversight of AI, and 20% had at least one director with AI expertise (up from 11% in 2022). Disclosure and committee charters covering AI are trending upwards across sectors, with particular growth in the Information Technology, Communications, and Consumer Discretionary industries.

    AI Governance: Committee Structures and Reporting Lines

    Boards employ a variety of oversight models. Best practice is sector- and organization-specific but often involves expanding the remit of the risk, audit, or technology committees, or establishing new AI/technology committees to clarify accountability for AI risk, ethics, and strategy.

    Notably, shareholder activism in 2024 – 2025 pushed several large companies (banks, retailers, technology giants) to amend committee charters for explicit AI oversight and to improve disclosure around board-level AI governance. There is a marked trend toward assigning strategic AI oversight responsibilities at the full board level – indicating increasing recognition of AI’s pervasive impact beyond IT or compliance domains.

    AI Ethics Boards and Cross-Functional Governance

    Explicit AI Ethics and Review Boards remain relatively rare (about 2–3% adoption in S&P 500) but are increasing, especially in industries with direct AI R&D or significant customer-facing automation. These entities report, variably, to the board, the risk committee, or the CEO and serve as multi-disciplinary risk/ethics panels – a practice recommended by both global regulators (OECD, UNESCO, NIST, ISO) and leading governance consultancies.

    Key responsibilities for such entities include reviewing potential model bias, explainability, safety, privacy compliance, and the implementation of ethical guidelines – often in response to regulatory frameworks or sector standards such as the EU AI Act, AI Risk Management Frameworks, and jurisdictional voluntary codes of conduct.

    AI Risk, Bias, and the Board’s Fiduciary Duties

    AI’s sheer velocity and complexity dramatically increase the risk that unmonitored automation could escalate small model errors into systemic issues (ranging from discriminatory outcomes to operational, legal, or IP exposure) before human controls intervene. Leading boards are therefore building scenario analysis, real-time monitoring, and bias/ethical auditing into their oversight scope, often through the development of internal AI Centers of Excellence reporting directly to the board’s risk committee rather than to IT or business units.

    Board Education and Expertise in AI

    Most directors still report only foundational knowledge of AI risks and governance models, with only a minority having hands-on experience or technical backgrounds – a consistent finding across US, Canadian, and European surveys. Training and board refreshment with AI-savvy members or advisor participation in targeted committee meetings are cited as effective strategies to raise overall board fluency.

    Shareholder Proposals and External Pressure

    AI-focused shareholder proposals grew more than fourfold in 2024, spanning requests for impact assessments, ethical use commitments, transparency on data sourcing, and amendments to board committee charters. These proposals are appearing in sectors well outside “Big Tech”: finance, retail, telecoms, media, consumer services, and even industrials and oil – signaling that investors see AI preparedness and governance as key to long-term corporate value.

    AI Regulation: Fast-Evolving, Broad in Scope

    Boards are advised to track not only sectoral and jurisdiction-specific regulations (e.g., EU AI Act, Canadian AIDA, US state initiatives, industry codes) but also voluntary global standards (OECD Principles, NIST AI RMF, UNESCO, ISO/IEC 42001, IEEE 7000) that shape responsible AI and can be used to demonstrate “reasonable care” in oversight.

    Canadian and US boards, for instance, face a patchwork of privacy and AI mandates, with provinces like Québec already enforcing disclosure and transparency on AI-based decision-making and hiring, and national regulators encouraging voluntary adoption of governance frameworks, pending federal law finalization.

    AI Governance as Innovation Catalyst and Reputational Shield

    Despite the risk focus, directors and shareholders recognize AI’s potential to drive value transformation, operational efficiency, and market differentiation. Best-in-class board AI governance supports, rather than stifles, responsible innovation by setting clear strategic objectives, establishing agile and transparent oversight, and nurturing an experimental, evidence-based learning culture for rapid adaptation to AI-driven disruption.


    Privacy: From Compliance Backwater to Boardroom Priority

    Explosion of Privacy Regulation and Litigation

    Privacy is now firmly a board-level concern – driven by the explosive growth of global regulations (GDPR, CCPA, CPRA, Law 25 in Québec, and others), the proliferation of class-action lawsuits and shareholder litigation following breaches, and steep increases in statutory penalties for non-compliance. In Canada, the emergence of privacy class actions (often following data breaches), the introduction of administrative monetary penalties, and the increasing focus of proxy advisors and ESG rating agencies mean boards are directly accountable for privacy program effectiveness.

Similar dynamics are evident globally, with regulatory scrutiny reaching new heights – from the EU’s record fines to US SEC enforcement actions, and new requirements for data minimization, transparency, and expanded individual rights.

    Board’s Duty of Oversight and Accountability

    Boards are required as part of their fiduciary and statutory duties to exercise active, documented oversight of data privacy – ensuring the company understands the purposes and methods of its data collection, is transparent with stakeholders, conducts periodic risk and program reviews, and maintains a robust compliance posture with all relevant laws.

    A privacy-by-design approach – embedding privacy safeguards into systems and processes at the outset rather than retrofitting them in response to incidents – is cited as board-level leading practice, and demonstrably strengthens consumer trust and compliance outcomes.

    Privacy by Design and Board Responsibility

    Successful integration of privacy into product, process, and business model design requires direct top-management and board support. Case studies across sectors find that proactive leadership, cross-functional risk and assessment processes, and executive education foster a culture in which privacy is a strategic asset, rather than a legal burden.

    Boards must ensure that privacy programs are adequately funded, systematically reviewed, and that the company can furnish evidence of compliance to regulators on demand. This includes comprehensive documentation, mapping of personal data flows, PIA (Privacy Impact Assessment) procedures, and quarterly targeted staff training – especially for teams handling sensitive information or managing third-party contracts.

    Privacy Risk Governance and Third-Party Exposure

    With data-driven business models reliant on a vast ecosystem of vendors and cloud partners, boards face growing exposure from privacy failures in third parties. Effective board oversight now includes vendor data privacy due diligence, monitoring, and clear contract compliance requirements aligned with applicable frameworks (e.g., NIST, ISO 27001).

    Emerging Privacy Priorities for Boards

    • Defining and regularly reviewing the company’s purpose for personal information collection and retention
    • Mandating and overseeing privacy-by-design principles, and documenting evidence of program maturity
    • Preparing for class action and derivative lawsuits linked to breach of privacy oversight duty
    • Integrating privacy into board-level crisis and reputation management, with ready response plans for data incidents
    • Supporting CPO/DPO roles and regular reporting to the board on privacy metrics and program status
    • Ensuring strategic alignment between business growth initiatives and privacy compliance impacts.

    Risk Management: The Evolution of Board Risk Committees and Oversight Mechanisms

    The New Face of Board-Level Risk Committees

    In 2025, the evolution of risk management at board level has become as dramatic as the changes in the risk landscape itself. The traditional, quarterly risk review has proven utterly insufficient – modern committees must now operate as dynamic, multidisciplinary teams that actively monitor, anticipate, and lead rapid organizational responses to digital, regulatory, ESG, and geopolitical risks.

    Charters are being updated to codify information governance, cybersecurity, AI ethics, and data privacy oversight as standing committee responsibilities. Sector exemplars include adding digital, privacy, or AI specialists to risk and audit committees, and establishing cross-functional links with internal audit, legal, compliance, technology, and ESG teams.

    Board Composition and Digital/Tech Acumen

    Digital acumen in the boardroom is no longer a “nice to have.” There is clear evidence that boards with members experienced in data science, cybersecurity, privacy law, and AI governance are better equipped to challenge management, ask the right questions, and maintain actionable risk registers that reflect real operational exposure.

    Further, the need to bridge expertise gaps is driving the formation of technology/innovation committees, the routine use of external experts, and targeted director education programs focused on technology and risk developments.

    Dynamic Crisis and Incident Scenario Planning

    Agile risk oversight requires readiness not only for likely risks but also for “unthinkable” black swan events – high-impact, low-likelihood scenarios such as systemic supply chain attacks, major regulatory shocks, or catastrophic AI failures. Boards are increasingly engaging in scenario planning, crisis simulations, and after-action reviews to pressure-test assumptions and foster organizational learning at all levels.

    ESG and Information Governance Integration

    Boards are aligning risk management with ESG priorities, including the development of “double materiality” frameworks that consider both financial and broader societal impacts of cyber, privacy, and AI risks. Disclosure of material IT, cyber, and AI risks in ESG reporting is quickly becoming a global investor expectation and board-level requirement. Supply chain, third-party, and data privacy exposures must be incorporated into both sustainability and financial risk reporting frameworks.

    Standing Committee Charters and Best Practices

    Charters for risk, audit, and/or technology committees should be reviewed and updated annually, specifically to clarify oversight roles for cyber, AI, privacy, third-party, and ESG-related risks. Boards should also ensure sufficient skills diversity within committees and establish clear escalation, reporting, and review cadence for major incidents and audits.


    Key Cross-Theme Trends and Strategic Priorities

    1. Board Education and Digital Fluency

    The pace of technology-driven change means that continuous director education is an existential necessity. Boards must regularly schedule briefings on cybersecurity, AI risk, privacy law, and best-practice governance frameworks, leveraging both internal and external subject matter expertise.

    2. Crisis Scenario Planning and Response Integration

Crisis management is now everybody’s business, starting in the boardroom. The board’s defined role is strategic oversight: supporting management during a crisis without stepping into operational decision-making. This requires regular crisis simulations, not just tabletop exercises, along with clear post-mortem analysis and integration of lessons learned into the ongoing risk agenda.

    3. Proactive Engagement with Shareholders and Regulatory Developments

    Directors should anticipate ongoing activism and regulatory scrutiny around all facets of information governance. Proactive disclosure, transparent principles for AI and data management, and regular engagement with investors on ESG, privacy, and risk management practices are now seen as foundational to reputational and capital resilience.

    4. Third-Party/Supply Chain, Cross-Functional, and Multijurisdictional Risks

    Third-party risk is not only a cyber issue – it is integral to privacy, AI, and ESG frameworks. Boards must oversee holistic TPRM programs, cross-functional reporting, and real-time risk dashboards that reflect the interconnectedness and global reach of today’s digital ecosystem.

    5. Embedding Privacy and Security by Design

    Privacy by Design and Security by Design are no longer slogans but board-level imperatives, required by regulation and expected by customers and investors. Boards must oversee the proactive integration of these principles into product development, operations, M&A due diligence, and supply chain practices.


    Conclusion: The Board’s Imperative in Information Governance for 2025 and Beyond

    The expanding scope, scale, and intensity of information governance challenges require a new mindset at the board level. Directors must operate as strategic navigators, charting a course between innovation and risk, compliance and value creation, adaptation and assurance.

    Best-in-class boards will distinguish themselves not through mere compliance, but by embedding information governance into the very fabric of their organization’s culture, strategy, and stakeholder relationships. This means fostering mature CISO-board partnerships, demanding robust cross-functional risk governance, committing to director education, and setting a visible example that positions the company as a trusted, resilient, and ethical participant in the digital economy.

    The age of digital transformation, AI ascendency, and global data flows will only accelerate. Boards of Directors – in their composition, committee structures, charters, and everyday practices – will define their organizations’ capacity to not only weather the coming waves of risk and regulation, but to seize the unprecedented opportunities they bring.


  • Minimum Viable Governance: A Lean Blueprint for Integrated Oversight in the Age of AI and Data

    By Robert Gerbrandt

    Why Governance Needs a Reboot

    Governance has long been the backbone of organizational integrity. Yet, as digital transformation accelerates, legacy models—often siloed and process-heavy—struggle to keep pace. The rise of AI introduces new ethical and operational risks, while data proliferation demands tighter controls and clearer ownership. Meanwhile, information governance remains critical for privacy, security, and regulatory compliance. The Minimum Viable Governance (MVG) framework responds to this complexity with a “minimum viable” philosophy: deliver essential governance outcomes with the least possible burden. It’s not about cutting corners—it’s about cutting clutter.

    Intersection and Commonality between MVG, MVD, MVP, and MVE

The concept of “minimum viable” has been widely adopted across various domains, each with its unique focus but sharing a common philosophy of delivering essential value with minimal resources. This section explores the intersection and commonality between Minimum Viable Governance (MVG), Minimum Viable Design (MVD), Minimum Viable Product (MVP), and Minimum Viable Experience (MVE).

    Minimum Viable Governance (MVG): MVG aims to deliver essential governance outcomes with the least possible burden. It focuses on integrating the foundational elements of Information Governance, Data Governance, and AI Governance into a cohesive, lightweight, and scalable framework.

Minimum Viable Design (MVD): MVD emphasizes creating the simplest design that meets the core needs of users. It prioritizes functionality and user experience while avoiding unnecessary complexity. The goal is to deliver a design that is both effective and efficient, ensuring that users can achieve their objectives with minimal friction.

Minimum Viable Product (MVP): MVP is a development strategy that focuses on creating a product with just enough features to satisfy early adopters. The primary objective is to gather feedback and validate the product idea before investing significant resources. MVP allows for iterative improvements based on user feedback, ensuring that the final product meets market demands.

Minimum Viable Experience (MVE): MVE extends the concept of MVP by emphasizing the overall user experience. It aims to deliver a complete and satisfying experience with the minimum necessary features. MVE ensures that users not only find the product functional but also enjoyable and engaging.

    Commonality and Intersection:

    1. Lean Approach: All four concepts adopt a lean approach, focusing on delivering essential value with minimal resources. They prioritize efficiency and effectiveness, ensuring that the core objectives are met without unnecessary complexity.

    2. User-Centric: Each concept places a strong emphasis on the user. Whether it’s governance, design, product development, or experience, the primary goal is to meet the needs and expectations of the user.

    3. Iterative Improvement: The philosophy of iterative improvement is central to all four concepts. By starting with a minimal viable version, they allow for continuous feedback and refinement, ensuring that the final outcome is aligned with user needs and market demands.

    4. Scalability: Each concept is designed to be scalable. They provide a foundation that can be expanded and enhanced over time, allowing organizations to grow and adapt as needed.

    In summary, MVG, MVD, MVP, and MVE share a common philosophy of delivering essential value with minimal resources. They emphasize a lean, user-centric approach that allows for iterative improvement and scalability. By adopting these principles, organizations can achieve their goals more efficiently and effectively.

    The MVG Framework: Six Core Components

    MVG integrates the foundational elements of Information Governance, Data Governance, and AI Governance into six actionable components. Each is designed to be lightweight, scalable, and easy to implement.

    1. Lightweight Governance Committee

    Rather than multiple governance bodies, MVG proposes a single, cross-functional committee with representatives from each domain. This team meets quarterly and during incident reviews, owning and updating core policy documents. The goal: maintain oversight without over-administration.

    2. Role Assignment & Accountability

    MVG emphasizes clear ownership. Each business unit assigns data stewards for information systems, datasets, and AI models. These stewards log decisions in simple formats—spreadsheets suffice—ensuring traceability and accountability.
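
    As a hypothetical illustration, stewards’ decisions could be captured in a plain CSV file; the columns and file name below are assumptions, and a shared spreadsheet accomplishes the same thing.

    ```python
    import csv
    from datetime import date

    # Columns and file name are illustrative; a shared spreadsheet with the same
    # headings works equally well.
    LOG_FILE = "decision_log.csv"
    FIELDS = ["date", "asset", "decision", "rationale", "steward"]

    def log_decision(asset, decision, rationale, steward):
        """Append one traceable steward decision to the shared log."""
        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if f.tell() == 0:  # write the header the first time the file is used
                writer.writeheader()
            writer.writerow({
                "date": date.today().isoformat(),
                "asset": asset,
                "decision": decision,
                "rationale": rationale,
                "steward": steward,
            })

    log_decision(
        asset="Customer CRM database",
        decision="Approved export to analytics sandbox",
        rationale="Data anonymized; copy expires after 90 days",
        steward="Sales Ops",
    )
    ```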

    3. Risk and Compliance Registry

    A central, living document tracks key assets, associated risks, mitigation strategies, and regulatory requirements. Updates occur only when assets change significantly or incidents arise. This registry becomes the heartbeat of governance visibility.
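
    A minimal sketch of what one registry entry might look like follows; the fields are illustrative assumptions, and the same columns work just as well in a shared spreadsheet.

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class RegistryEntry:
        """One row in the MVG risk and compliance registry (fields are illustrative)."""
        asset: str
        steward: str
        risks: list
        mitigations: list
        regulations: list = field(default_factory=list)
        last_reviewed: date = field(default_factory=date.today)

    registry = [
        RegistryEntry(
            asset="Customer support chatbot",
            steward="Digital Services",
            risks=["incorrect automated advice", "prompt data leakage"],
            mitigations=["human review of escalations", "PII redaction at input"],
            regulations=["GDPR"],
        ),
    ]

    # Updates happen only when an asset changes significantly or an incident occurs.
    def record_review(entry: RegistryEntry, reviewed_on: date) -> None:
        entry.last_reviewed = reviewed_on
    ```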

    4. Policy Checklists

    Forget 50-page manuals. MVG uses three one-page checklists—one each for information, data, and AI. These are reviewed during onboarding and major changes, ensuring consistent policy application without overwhelming staff.
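
    To illustrate, one of the three one-page checklists might be captured as a simple list that can be applied to any proposed change; the items below are hypothetical examples for the AI domain, not a complete policy.

    ```python
    # One of the three MVG one-page checklists, here for the AI domain.
    # The items are hypothetical examples, not a complete policy.
    AI_CHECKLIST = [
        "Model purpose and business owner documented",
        "Source data reviewed for sensitive or regulated content",
        "Bias and explainability assessment completed for high-impact use",
        "Human escalation path defined",
        "Risk and compliance registry entry added or updated",
    ]

    def open_checklist_items(completed):
        """Return the checklist items still outstanding for a proposed change."""
        return [item for item in AI_CHECKLIST if item not in completed]

    remaining = open_checklist_items({"Model purpose and business owner documented"})
    print(f"{len(remaining)} checklist items outstanding")
    ```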

    5. Incident & Issue Response Protocol

    MVG simplifies incident management with a basic form capturing what happened, which asset was affected, actions taken, and ownership. Significant events are reviewed by the committee, and lessons learned are logged for organizational growth.
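
    As a sketch, the basic form could be represented as a small record like the one below; the field names and example values are illustrative assumptions.

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class IncidentRecord:
        """The MVG basic incident form as a record; field names are illustrative."""
        what_happened: str
        affected_asset: str
        actions_taken: list
        owner: str
        reported_at: datetime = field(default_factory=datetime.now)
        significant: bool = False  # significant events are reviewed by the committee
        lessons_learned: str = ""

    incident = IncidentRecord(
        what_happened="Shared report exposed customer emails to the wrong team",
        affected_asset="Customer CRM database",
        actions_taken=["Access revoked", "Recipients asked to delete copies"],
        owner="Sales Ops steward",
        significant=True,
    )
    ```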

    6. Lightweight Audit & Training

    Annual self-assessments against the checklists and risk registry replace exhaustive audits. One mandatory awareness session per year ensures staff remain informed and engaged.

    Governance Roles: Lean but Accountable

    MVG’s role structure is designed for clarity and agility. Key roles include:

• Governance Committee: Owns policies, reviews incidents, and ensures compliance.
    • CIO (Optional): Provides strategic oversight and escalates issues.
    • Domain Stewards: Manage quality, documentation, and ethical use across data, AI, and information.
    • Owners: Assign business value, approve changes, and manage lifecycle rules.
    • Custodians/Technical Leads: Implement controls and manage infrastructure.
    • Compliance Champions: Advise on regulations and support policy development.
    • Business Users: Use assets responsibly and participate in training.

    This structure ensures direct accountability while remaining lean enough for rapid deployment.

    Measuring What Matters: MVG KPIs

    Governance without measurement is guesswork. MVG introduces eight practical KPIs to track performance across domains:

1. Training Completion: Percentage of staff completing annual governance training.
    2. Ownership Coverage: Proportion of assets with assigned stewards.
    3. Checklist Review Rate: Percentage of changes reviewed against policy checklists.
    4. Governance Incident Rate: Number and severity of reported breaches or risks.
    5. Audit Readiness: Share of assets with up-to-date documentation.
    6. Explainability/Fairness: Coverage of ethical assessments for high-impact AI models.
    7. Value Realization: Governance initiatives delivering measurable business outcomes.
    8. Time-to-Incident Resolution: Median time from issue detection to resolution.

    These KPIs are outcome-oriented, enabling organizations to assess governance effectiveness without excessive reporting.
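
    Because the KPIs are simple ratios and counts, they can be computed directly from the registry, training records, and incident log. The short sketch below shows two of them; the data shapes and example numbers are assumptions.

    ```python
    def ownership_coverage(assets_with_steward, total_assets):
        """KPI 2: proportion of assets with an assigned steward."""
        return assets_with_steward / total_assets if total_assets else 0.0

    def training_completion(completed, headcount):
        """KPI 1: share of staff completing annual governance training."""
        return completed / headcount if headcount else 0.0

    print(f"Ownership coverage: {ownership_coverage(42, 50):.0%}")      # 84%
    print(f"Training completion: {training_completion(180, 200):.0%}")  # 90%
    ```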

    Scaling MVG: Maturity-Based Targets

    MVG is designed to grow with the organization. KPI targets are calibrated across three maturity levels:

• Initial/Baseline: Focus on establishing accountability (e.g., 60–70% training completion).
    • Developing: Push for broader coverage and process closure (e.g., 80–90% asset stewardship).
    • Advanced/Optimized: Achieve full coverage and emphasize business value (e.g., 100% checklist reviews, <2-week incident resolution).

    Thresholds use red/yellow/green indicators to signal performance, making governance health easy to monitor and communicate.
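
    A small sketch of how the red/yellow/green logic might be encoded follows; the cut-off values are illustrative assumptions and should be calibrated to the organization’s own baseline.

    ```python
    # Illustrative red/yellow/green cut-offs for one KPI at each maturity level.
    THRESHOLDS = {
        "training_completion": {
            "initial":    {"green": 0.70, "yellow": 0.60},
            "developing": {"green": 0.85, "yellow": 0.75},
            "advanced":   {"green": 0.95, "yellow": 0.90},
        },
    }

    def rag_status(kpi, maturity_level, value):
        """Map a KPI value to red/yellow/green for the given maturity level."""
        cutoffs = THRESHOLDS[kpi][maturity_level]
        if value >= cutoffs["green"]:
            return "green"
        if value >= cutoffs["yellow"]:
            return "yellow"
        return "red"

    print(rag_status("training_completion", "developing", 0.78))  # yellow
    ```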

    Why MVG Works

    MVG succeeds because it aligns governance with organizational agility. It avoids the trap of over-engineering, instead focusing on:

• Speed: Rapid deployment with minimal setup.
    • Clarity: Defined roles and responsibilities.
    • Flexibility: Scalable across teams and maturity levels.
    • Value: Direct linkage to business outcomes.

    For startups, MVG offers a governance “starter kit.” For enterprises, it provides a way to streamline and unify fragmented oversight efforts.

    Putting MVG into Practice

    To implement MVG, organizations should:

1. Form the Governance Committee: Identify representatives and schedule quarterly meetings.
    2. Assign Stewards: Map key assets and assign owners.
    3. Create the Risk Registry: Use a shared document to track risks and controls.
    4. Develop Checklists: Draft one-pagers for each domain.
    5. Launch Training: Schedule annual sessions and track completion.
    6. Monitor KPIs: Set initial targets and review quarterly.

    Tools like spreadsheets, shared drives, and basic forms are sufficient. The emphasis is on function over form.

    Conclusion: Governance for the Real World

    In an era of accelerating innovation and increasing scrutiny, governance must evolve. The MVG framework offers a pragmatic path forward—one that respects the complexity of modern organizations while embracing the simplicity needed for action. By integrating the essentials of information, data, and AI governance into a cohesive, minimum viable model, MVG empowers organizations to govern smarter, not harder.