Author: robgerbrandt

  • Information Governance Is Your Secret AI Weapon

    For the past several years, the dominant narrative around artificial intelligence has been clear: success depends on hiring elite technical talent and deploying cutting-edge models. Organizations have raced to recruit data scientists, machine learning engineers, and AI specialists, often at significant cost.

    But this narrative is incomplete—and increasingly, it’s proving to be wrong.

    Across industries, a pattern is emerging. Companies invest heavily in AI tools and talent, only to encounter stalled initiatives, unreliable outputs, and limited business impact. The postmortems rarely point to model failure. Instead, they reveal a more fundamental issue: the underlying information is fragmented, inconsistent, poorly labeled, or simply untrustworthy.

    In other words, the problem isn’t intelligence. It’s information.

    This is where a long-overlooked function comes into focus: Information Governance (IG). Traditionally viewed as a compliance or records management discipline, IG is now poised to become one of the most critical enablers of AI success. Organizations that recognize this shift—and act on it—will gain a significant competitive advantage.

    The Hidden Dependency of AI

    AI systems, particularly those based on machine learning and large language models, are often described as transformative technologies. But their capabilities are fundamentally dependent on the data they consume.

    AI does not create insight from nothing. It identifies patterns, relationships, and signals within existing information. If that information is flawed, the outputs will be too—often in ways that are subtle, difficult to detect, and costly to correct.

    Consider a few common scenarios:

    • A customer service AI trained on inconsistent historical records produces contradictory responses, eroding customer trust.
    • A predictive model built on incomplete operational data generates inaccurate forecasts, leading to poor strategic decisions.
    • A generative AI tool surfaces sensitive or outdated information because retention and classification policies were never enforced.

    In each case, the failure is not technical—it is informational.

    Yet most AI strategies continue to prioritize model performance over data quality. Organizations invest in more sophisticated algorithms while leaving the underlying information environment largely unchanged.

    This imbalance is not sustainable.

    From Data Problem to Governance Problem

    It is tempting to frame these challenges as “data quality issues.” But that framing understates the root cause.

    Data quality is not a one-time cleanup exercise. It is the result of sustained practices—how information is created, labeled, stored, shared, and retired over time. These practices are governed (or not) by policies, standards, and accountability structures.

    In other words, data quality is a governance outcome.

    Information Governance professionals have long operated in this space. Their work includes:

    • Defining classification schemas and metadata standards
    • Establishing retention and disposition policies
    • Managing information lifecycle processes
    • Ensuring compliance with legal and regulatory requirements
    • Reducing duplication and improving consistency across systems

    Historically, these activities were seen as risk mitigation—important, but not strategic.

    AI changes that equation.

    When machines rely on enterprise information to generate outputs, make recommendations, or automate decisions, the quality and consistency of that information directly affect business performance. Governance is no longer just about avoiding risk; it is about enabling value.

    The Standardization Advantage

    One of the most underappreciated contributions of Information Governance is standardization.

    AI systems thrive on consistency. They depend on structured inputs, predictable formats, and clearly defined relationships. Variability—whether in naming conventions, document structures, or metadata—introduces noise that degrades performance.

    Information Governance professionals specialize in reducing that variability.

    They ask questions that are foundational for AI but often overlooked in technical implementations:

    • What does this data element mean across different business units?
    • Are we using consistent terminology and definitions?
    • How do we ensure that similar content is categorized in the same way?
    • What metadata is required to make this information usable by machines?

    These are not purely technical questions. They require cross-functional alignment, policy enforcement, and an understanding of how information flows through the organization.

    Without this standardization, even the most advanced AI systems struggle to produce reliable results.

    With it, organizations can unlock significantly more value from their existing data.
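    To make the metadata question concrete, here is a minimal sketch of what an enforceable metadata standard could look like. The field names, controlled vocabularies, and rules are illustrative assumptions rather than a prescribed schema; the point is that consistency becomes something a machine can check.

    ```python
    from dataclasses import dataclass
    from datetime import date

    # Illustrative controlled vocabularies; real values would come from the
    # organization's own classification and business-unit taxonomies.
    ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential", "restricted"}
    ALLOWED_BUSINESS_UNITS = {"finance", "hr", "operations", "legal"}

    @dataclass
    class DocumentMetadata:
        """Minimal metadata record required before content is exposed to AI systems."""
        document_id: str
        title: str
        owner: str              # accountable business owner, not just the uploader
        business_unit: str
        classification: str
        last_reviewed: date
        retention_until: date

        def validate(self) -> list[str]:
            """Return a list of governance violations; an empty list means compliant."""
            problems = []
            if self.classification not in ALLOWED_CLASSIFICATIONS:
                problems.append(f"unknown classification: {self.classification}")
            if self.business_unit not in ALLOWED_BUSINESS_UNITS:
                problems.append(f"unknown business unit: {self.business_unit}")
            if self.retention_until < date.today():
                problems.append("document is past its retention date")
            return problems

    # Example: a record that would be flagged before it ever reaches an AI pipeline.
    doc = DocumentMetadata(
        document_id="DOC-001",
        title="2019 Pricing Guidelines",
        owner="jane.doe",
        business_unit="finance",
        classification="Confidential",      # wrong case: not in the controlled vocabulary
        last_reviewed=date(2019, 6, 1),
        retention_until=date(2024, 6, 1),   # retention period that has already lapsed
    )
    print(doc.validate())
    ```

    The specific fields matter less than the principle: the same validation runs on every document, in every business unit, before the content ever reaches a model.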

    The Cost of Ignoring Governance

    Many organizations treat Information Governance as a secondary concern—something to address after AI tools are deployed. This approach introduces several risks.

    First, it leads to inefficient use of resources. Teams spend time cleaning and reconciling data for individual projects, rather than addressing systemic issues. This creates duplication of effort and slows down innovation.

    Second, it undermines trust. When AI outputs are inconsistent or difficult to explain, stakeholders become skeptical. Adoption stalls, regardless of the underlying technology’s potential.

    Third, it increases exposure to regulatory and reputational risk. AI systems can inadvertently surface sensitive, outdated, or non-compliant information if governance controls are weak.

    Finally, it limits scalability. Without a governed information foundation, each new AI initiative becomes a bespoke effort, requiring extensive data preparation. This prevents organizations from achieving the economies of scale that make AI truly transformative.

    In contrast, organizations that invest in governance upfront create a reusable foundation that supports multiple use cases.

    Elevating IG from Back Office to Strategy

    To fully realize the benefits of AI, organizations must rethink the role of Information Governance.

    This begins with a shift in perception. IG should not be positioned as a compliance function operating on the periphery of the business. It should be integrated into core strategic initiatives, particularly those involving AI and data.

    Practically, this means:

    • Involving IG professionals early in AI project planning, not after deployment
    • Aligning governance frameworks with AI use cases and business objectives
    • Establishing clear ownership and accountability for information assets
    • Investing in tools and processes that support scalable governance

    It also requires a cultural change. Business leaders and technical teams must recognize that information quality is a shared responsibility, not something that can be delegated entirely to a single function.

    Bridging the Gap Between IG and AI Teams

    One of the key challenges is the disconnect between Information Governance and AI teams.

    These groups often operate in silos, with different priorities, vocabularies, and success metrics. IG professionals focus on control, consistency, and compliance. AI teams prioritize speed, experimentation, and performance.

    Bridging this gap requires deliberate effort.

    Organizations can start by creating cross-functional teams that include both IG and AI expertise. These teams can collaborate on defining data standards, identifying critical information assets, and designing governance processes that support innovation rather than hinder it.

    Another effective approach is to establish shared metrics. For example, instead of measuring AI success solely in terms of model accuracy, organizations can include metrics related to data quality, consistency, and governance compliance.

    This alignment ensures that governance is seen as an enabler of AI performance, not a constraint.
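    As a rough illustration of such shared metrics, the sketch below folds simple data-quality and governance measures into the same scorecard as model accuracy. The field names and thresholds are assumptions for illustration, not a standard.

    ```python
    def governance_scorecard(records: list[dict], model_accuracy: float) -> dict:
        """Combine model accuracy with data-quality and governance metrics.

        Each record is assumed to carry 'classification' and 'last_reviewed_days'
        keys; both the keys and the one-year freshness rule are illustrative.
        """
        total = len(records)
        classified = sum(1 for r in records if r.get("classification"))
        fresh = sum(1 for r in records if r.get("last_reviewed_days", 9999) <= 365)

        return {
            "model_accuracy": model_accuracy,
            "classification_coverage": classified / total if total else 0.0,
            "freshness_rate": fresh / total if total else 0.0,
        }

    sample = [
        {"classification": "internal", "last_reviewed_days": 120},
        {"classification": None, "last_reviewed_days": 900},
    ]
    print(governance_scorecard(sample, model_accuracy=0.91))
    # A team is judged on all three numbers, not on accuracy alone.
    ```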

    A Practical Illustration

    Consider a global organization implementing an AI-powered knowledge assistant for its employees.

    In a typical approach, the focus might be on selecting the right model, integrating it with existing systems, and fine-tuning its responses. Governance considerations are addressed later, if at all.

    The result is predictable: the assistant provides inconsistent answers, pulls from outdated documents, and occasionally surfaces sensitive information. User trust declines, and the initiative struggles to deliver value.

    Now consider the same initiative with strong Information Governance involvement from the outset.

    IG professionals work with business units to:

    • Identify authoritative sources of information
    • Apply consistent classification and metadata
    • Remove redundant and outdated content
    • Define access controls and retention policies

    The AI team then builds on this curated, standardized information base.
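    What building on a curated, standardized information base can mean in practice is sketched below: the assistant's index admits only documents that pass governance checks mirroring the four activities listed above. The field names and rules are hypothetical.

    ```python
    from datetime import date

    def admit_to_knowledge_base(doc: dict, authoritative_sources: set[str]) -> bool:
        """Admit a document to the assistant's index only if it passes governance checks.

        The keys ('source', 'classification', 'retention_until', 'superseded_by')
        are assumed for illustration; a real pipeline would map them from the catalog.
        """
        if doc["source"] not in authoritative_sources:
            return False                          # not an authoritative source
        if doc["classification"] in {"confidential", "restricted"}:
            return False                          # excluded by access controls
        if doc["retention_until"] < date.today():
            return False                          # past retention, due for disposal
        if doc.get("superseded_by"):
            return False                          # redundant or outdated version
        return True

    corpus = [
        {"source": "policy_portal", "classification": "internal",
         "retention_until": date(2027, 1, 1), "superseded_by": None},
        {"source": "shared_drive", "classification": "internal",
         "retention_until": date(2027, 1, 1), "superseded_by": None},
    ]
    index = [d for d in corpus if admit_to_knowledge_base(d, {"policy_portal", "hr_handbook"})]
    print(len(index))  # only the governed, authoritative document is indexed
    ```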

    The outcome is markedly different. The assistant delivers more accurate, reliable, and contextually appropriate responses. Users trust it, adoption increases, and the organization realizes tangible productivity gains.

    The technology is the same. The difference is the information.

    Building a Governance-First AI Strategy

    Organizations looking to strengthen their AI outcomes can take several concrete steps:

    • Assess the current state of information governance, including policies, processes, and tools.
    • Identify high-value AI use cases and map the information dependencies for each.
    • Prioritize governance improvements that directly support these use cases.
    • Invest in metadata, classification, and lifecycle management capabilities.
    • Foster collaboration between IG, data, and AI teams.

    Importantly, this is not about slowing down AI initiatives. It is about making them more effective and sustainable.

    A governance-first approach may require more upfront effort, but it reduces rework, accelerates scaling, and increases the likelihood of success.

    The Competitive Implication

    As AI adoption becomes widespread, the differentiating factor will not be access to technology. Models and tools are becoming increasingly commoditized.

    The real advantage will lie in how effectively organizations manage and leverage their information.

    Companies with well-governed, high-quality information environments will be able to deploy AI faster, achieve better results, and adapt more quickly to new opportunities.

    Those without this foundation will continue to struggle, regardless of how much they invest in technology.

    In this context, Information Governance is not just a support function—it is a strategic capability.

    Rethinking What Matters

    The current focus on AI talent and tools is understandable. These elements are visible, tangible, and easy to measure.

    Information Governance, by contrast, operates behind the scenes. Its impact is often indirect, making it easy to overlook.

    But as organizations gain more experience with AI, the importance of information quality becomes impossible to ignore.

    The question is no longer whether governance matters. It is whether organizations are willing to elevate it to the level required for AI success.

    Those that do will discover that their “secret weapon” has been there all along—quietly shaping the quality of their information, and now, the effectiveness of their intelligence.

  • The Crucial Connection Between Content and Context in AI Success

    Artificial intelligence isn’t just fueled by data; it’s shaped by meaning. Behind every smart recommendation, accurate prediction, or fluent conversation lies a delicate interplay between content — the data AI learns from — and context — the environment or situation in which that data is interpreted. When either falters, trust and effectiveness collapse.

    Why Content Matters

    AI systems learn from the content they consume: words, numbers, images, videos, and interactions. This content can be structured data (spreadsheets, databases, sensor readings) or unstructured data (emails, documents, social media posts).
    Structured data offers clarity and consistency, making it ideal for pattern recognition and analytics. Unstructured data, by contrast, captures real-world nuance, emotion, and cultural tone — but is messy and prone to ambiguity. AI models depend on learning meaningful structure from this chaos.

    When content is incomplete, inaccurate, or biased, the resulting models inherit those flaws. An AI model trained on unbalanced datasets or poorly cleaned text may build distorted worldviews, misinterpret user intent, or make inaccurate predictions. Garbage in, as the old saying goes, means garbage out — but in AI, it also means misleading context.

    Context: The Invisible Compass

    Context transforms raw content into insight. It helps AI understand why something matters, not just what it is. For instance, the phrase “cold” can describe weather, an illness, or emotional detachment — the meaning depends on contextual signals.
    Effective AI captures these signals from time, location, intent, and prior interactions. Context makes an assistant conversationally fluent, a recommendation engine personally relevant, and an autonomous system situationally aware.

    However, poor or missing content erodes this ability. An AI model lacking diverse examples may misread cultural references. Insufficient metadata can strip away temporal or geographic cues. When content fails, context becomes guesswork — and that’s where trust breaks down.

    Risks and Challenges in AI Training Data

    Building robust AI means navigating several content-level challenges:

    • Bias and imbalance: Overrepresentation of certain viewpoints or demographics leads to unfair outputs.
    • Noise and inconsistency: Unstructured data often contains contradictions, slang, and errors that obscure meaning.
    • Data fragmentation: Disconnected silos make it hard to encode context consistently across models.
    • Privacy and ethics: Extracting context from user data must respect consent and confidentiality.
    • Records and retention: As organizations accumulate vast amounts of data for AI training, the temptation to retain it indefinitely grows. Over-retention increases the likelihood of outdated, irrelevant, or personally identifiable information remaining in datasets, which can degrade performance and raise storage, privacy, and compliance risks. In financial use cases, stale records are especially dangerous: an AI model trained on old account balances, expired credit terms, or prior market conditions may generate forecasts or recommendations that no longer reflect reality, leading to flawed decisions and loss of trust. Effective governance — including lifecycle management and timely disposal — helps ensure that an AI’s “memory” is fresh, ethical, and aligned with present-day realities.

    Solving these issues requires disciplined data curation, diverse sampling, and transparency throughout the model lifecycle. Increasingly, AI developers combine structured datasets (for precision) with enriched unstructured inputs (for relevance) to balance clarity with depth.
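    To ground the retention point, here is a minimal sketch of how stale or over-retained records might be screened out before training. The two-year freshness cutoff and the field names are illustrative assumptions, not recommendations.

    ```python
    from datetime import datetime, timedelta

    def filter_training_records(records: list[dict], max_age_days: int = 730) -> list[dict]:
        """Drop records past their retention date or older than a freshness cutoff.

        Assumes each record has 'as_of' (ISO date string) and an optional
        'retention_until'; the field names and two-year cutoff are illustrative.
        """
        now = datetime.now()
        cutoff = now - timedelta(days=max_age_days)
        kept = []
        for rec in records:
            as_of = datetime.fromisoformat(rec["as_of"])
            retention_until = rec.get("retention_until")
            if retention_until and datetime.fromisoformat(retention_until) < now:
                continue  # past its retention schedule: dispose, do not train on it
            if as_of < cutoff:
                continue  # too stale to reflect present-day conditions (e.g., old balances)
            kept.append(rec)
        return kept

    records = [
        {"as_of": "2019-03-01", "retention_until": "2024-03-01"},  # expired and stale
        {"as_of": "2025-06-01"},                                   # current
    ]
    print(len(filter_training_records(records)))  # only the current record survives
    ```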

    Building Contextually Aware AI

    AI success now hinges on a dual mandate: content integrity and context competence. Clean, representative data enables learning, but contextual understanding ensures meaningful application. Techniques like multimodal learning, fine-tuning, and prompt engineering are pushing toward models that grasp intent and perspective as well as fact.

    Ultimately, AI that comprehends both content and context doesn’t just process information — it interprets reality. And in that nuance lies the path to reliable, human-centered intelligence.

  • A Tale of Two Nations (apologies to Charles Dickens)

    or

    How the U.S. and UAE Are Taking Different Paths to National AI Leadership

    Artificial Intelligence has become a defining force in global competitiveness, national security, and economic transformation. But while many countries are racing to develop national AI strategies, few contrast as sharply—and as strategically—as the United States and the United Arab Emirates.

    Both nations view AI as essential to their future, yet their frameworks reflect very different priorities, governance models, and national ambitions. Here’s a breakdown of how these two AI powerhouses are charting distinct paths.

    1. Strategic Vision: Competition vs. Transformation

    United States: National Security & Global Dominance

    The U.S. positions AI as a geopolitical imperative, aiming to secure unquestioned technological dominance in the face of global competition. This is reflected in America’s AI Action Plan, which frames AI as a driver of economic strength and national power.

    U.S. strategy also emphasizes a decentralized, innovation‑driven ecosystem focused on frontier R&D, talent, and infrastructure.

    United Arab Emirates: National Transformation & Economic Diversification

    The UAE’s AI ambition is deeply tied to long‑term development goals like UAE Centennial 2071, positioning AI as a catalyst for economic diversification and nationwide digital transformation.

    Its strategy aims to make the UAE a global AI leader by 2031 through early government adoption and broad sector-wide integration.

    2. Governance Models: Federal Guardrails vs. Centralized Leadership

    United States

    The U.S. approach seeks a unified national AI standard to avoid a fragmented landscape of 50 different state laws.

    Agencies must also comply with national security‑focused frameworks defining prohibited and high‑impact AI uses.

    United Arab Emirates

    The UAE employs a centralized governance model led by the Minister of State for Artificial Intelligence and the UAE AI Council.

    Its strategy is designed around AI‑friendly regulation, safe deployment, and seamless integration of AI into public services.

    3. Technology Priorities: Frontier Models vs. Sector Innovation

    United States

    The U.S. prioritizes:

    • Frontier AI model development
    • Semiconductors and advanced computing
    • National AI evaluations and testing ecosystems

    These priorities reflect the focus on innovation, defense, and securing supply chains against adversaries.

    United Arab Emirates

    The UAE emphasizes:

    • AI in transportation, healthcare, education, energy, and smart cities
    • Data infrastructure and emerging sectors
    • Major investments in AI data centers and national AI education programs

    This breadth highlights a whole‑of‑economy transformation approach.

    4. Workforce and Talent: Upskilling vs. Nation‑Building

    United States

    The U.S. focuses on preparing an AI‑ready workforce, supporting workers affected by automation, and expanding STEM pipelines.

    United Arab Emirates

    The UAE is building a national talent ecosystem from the ground up—including mandatory AI education from kindergarten to university, and initiatives to attract global AI researchers.

    5. Role of Government: Regulator vs. Implementer

    United States

    The government acts primarily as a regulator and enabler, establishing guardrails while allowing private industry to drive innovation.

    United Arab Emirates

    The UAE government serves as a primary driver of AI adoption—rolling out AI across public services to accelerate nationwide adoption.

    6. Global Positioning: Defense Leadership vs. AI Destination Hub

    United States

    The U.S. uses AI as an instrument of global leadership—setting standards, countering adversaries, and protecting sensitive technologies.

    United Arab Emirates

    The UAE aims to become a global AI destination by attracting talent, investment, and international partnerships.

    Final Thoughts: Two Nations, Two Distinct Futures

    The U.S. is building an AI framework rooted in security, competition, and technological leadership, while the UAE is crafting one focused on national transformation, economic diversification, and government‑driven innovation.

    Both paths offer valuable lessons—and together, they demonstrate how AI strategies can reflect a nation’s identity, priorities, and long‑term vision.

  • The AI Governance Mirage: Why Standalone Functions Are Doomed to Fail

    AI governance does not belong in its own silo; it must be treated as a core dimension of corporate governance, risk, and strategy, especially in financial services where fragmentation of accountability already amplifies risk.

    Why Standalone AI Governance Will Fail

    Attempts to build separate AI ethics or AI governance teams have already shown how easily they become marginalized, under‑resourced, and politically weak. Studies of tech‑sector ethics teams find they often lack authority, struggle to secure leadership backing, and are consulted too late in the lifecycle to change critical decisions, turning “AI governance” into a veneer rather than a mechanism of control. In financial services, where complex risk and compliance functions already exist, adding a separate AI governance committee typically duplicates effort, confuses who owns which decisions, and diffuses responsibility when something goes wrong. The result is more dashboards and policies, but less clarity about which executive is truly accountable for AI‑driven credit decisions, fraud detection, or AML alerts.

    The Fragmentation Trap in Financial Services

    Financial institutions are already grappling with overlapping regimes for privacy, cybersecurity, AML, KYC, and model risk management, each with its own committee, policy set, and control inventory. Regulators in key markets explicitly place ultimate responsibility for AI and model risk at the level of the board and senior management, not in specialist AI bodies, underscoring that AI risk is a variant of enterprise and model risk, not a new category exempt from existing accountability lines. Guidance such as OSFI’s model risk expectations, MAS proposals, and UK PRA supervisory statements all stress integrated, enterprise‑wide processes for model inventories, validation, and escalation, rather than parallel AI‑only channels. When institutions respond by spinning up standalone AI governance councils with their own taxonomies and policies, they create gaps between AI use in underwriting or trading and the established processes for risk, compliance, and audit.

    Why Tools and Committees Are Not Enough

    Vendors now market AI governance platforms and stress‑testing toolkits that promise to organize models, monitor drift, and automate documentation. Evidence from early adopters shows that while these tools can surface bias, performance issues, and policy gaps, they cannot resolve the fundamental questions of who decides, who pays, and who is held to account when AI systems conflict with strategic, legal, or ethical constraints. Research on failed AI and analytics initiatives repeatedly points to data silos, misaligned incentives, and weak decision rights—not the absence of specialized AI governance software—as the drivers of project failure. In practice, software‑centric and committee‑centric approaches risk becoming another compliance checkbox, disconnected from capital allocation, product design, and frontline behavior.

    Integrating AI into Existing Governance

    The more realistic path is to augment existing data governance, information security, model risk, and corporate governance structures to explicitly encompass AI, rather than founding a pure AI governance function. Boards should fold AI into their existing governance frameworks, asking how AI reshapes strategy, risk appetite, and culture, and then allocating oversight across current committees instead of creating yet another body competing for time and authority. In banking, that means embedding AI considerations into model risk policies under existing SR 11‑7‑style frameworks, aligning AI data use with current privacy and data‑management standards, and extending cybersecurity practices to cover model integrity and adversarial threats. A central coordinating function for AI can and should exist, but as an integrating mechanism that connects risk, compliance, technology, and business lines, not as a separate “AI governance office” with ambiguous power and overlapping mandates.

    A Contrarian Call to the C‑Suite

    For C‑suite and AI leaders, the contrarian move is to dismantle the idea that AI governance is a parallel universe requiring its own institutional layer. Instead, they should: align AI initiatives with corporate strategy and risk appetite at the board level, upgrade existing policies and committees to explicitly cover AI, and charge line executives—rather than AI specialists—with end‑to‑end accountability for AI‑enabled processes. In financial services in particular, the goal is not to stand up yet another steering committee and policy set, but to recognize AI as inseparable from core credit, fraud, AML, and customer‑experience decisions, governed through the same structures that already determine who gets a loan, who is monitored, and what happens when controls fail. Done this way, AI governance stops being a standalone activity and becomes what it must be to succeed: a reframing of corporate governance and strategy for an algorithmic era.

    Like this? Show your support at buymeacoffee.com/RobGerbrandt

  • Data Is Infrastructure: Why the Way You Think About Information Determines Your AI Future

    The organizations winning with artificial intelligence aren’t just building better models. They’re rethinking data itself — not as a byproduct of operations, but as the foundational layer upon which enterprise value is constructed.

    There is a telling paradox at the heart of most enterprise AI programs today. Companies invest heavily in the latest large language models, hire armies of data scientists, and commission ambitious transformation roadmaps — only to discover that their initiatives stall not at the frontier of computation, but at the foundation of information. The data isn’t ready. It never was.

    This is not a technical failure. It is a conceptual one.

    For decades, organizations have treated data as exhaust — a residual output of transactional systems, stored out of regulatory obligation and occasionally queried for backward-looking reports. Even as analytics matured and the language shifted toward “data-driven decision-making,” the underlying mental model remained one of data as asset: something to be accumulated, perhaps monetized, but fundamentally passive.

    Artificial intelligence renders that model obsolete. In an AI-powered enterprise, data must be understood as infrastructure — as foundational, as load-bearing, and as deliberately engineered as roads, power grids, or communication networks. This reframing carries profound implications for how organizations govern, invest in, and derive value from their information assets.

    What It Means to Treat Data as Infrastructure

    Infrastructure, by definition, is not an end in itself. It is the enabling substrate upon which productive activity depends. Public roads do not generate economic value directly; they make commerce, labor mobility, and supply chains possible. Similarly, when data is treated as infrastructure, it is positioned not as an output to be archived, but as a continuous, accessible, governed foundation that enables AI systems, analytical workloads, and decision-making processes to function reliably at scale.

    This framing applies with equal force to both structured and unstructured data — and the distinction matters enormously. Structured data, the rows and columns of transactional systems, CRMs, and ERPs, has long been the subject of governance frameworks and data warehousing investments. It is relatively well-understood, even if still imperfectly managed. Unstructured data — the documents, emails, call transcripts, contracts, images, sensor logs, and social signals that constitute an estimated 80 to 90 percent of all enterprise information — has largely been left ungoverned, unsearchable, and underutilized.

    Generative AI changes that calculus entirely. The most transformative enterprise AI applications — retrieval-augmented generation, intelligent document processing, knowledge management systems, AI-assisted legal review — draw precisely from unstructured sources. The organization that cannot govern, catalog, and reliably serve its unstructured data is operating its AI strategy on an unstable foundation. Treating all data, regardless of form, as critical infrastructure is no longer aspirational. It is a competitive imperative.

    Four Key Benefits of the Infrastructure Paradigm

    1. Compounding Returns on Governance Investment

    Infrastructure thinking introduces a logic of compounding returns that is absent from asset-based approaches to data. When a city invests in a road network, every subsequent business, resident, and service built along that network benefits from the original investment. The same dynamic applies to data. Organizations that invest in building a governed, well-documented, semantically consistent data foundation do not simply improve today’s analytics workload — they create a platform on which every future AI application can stand without rebuilding from scratch.

    In practice, this means that a robust data catalog, a unified metadata framework, and a coherent information governance policy pay dividends far beyond their initial use case. The first AI model trained on a well-structured enterprise knowledge base is merely the beginning. Subsequent models, agents, and applications inherit the same trusted substrate, dramatically reducing time-to-production and the cost of AI development. Organizations that treat data governance as one-time compliance theater — rather than as ongoing infrastructure maintenance — find themselves rebuilding the foundation with every new initiative.

    2. Trustworthiness as a Systemic Property

    One of the most pernicious risks of enterprise AI is the deployment of systems that produce confident, fluent, and wrong outputs. Hallucinations in large language models, biased predictions in machine learning systems, and stale context in retrieval pipelines all trace back, in significant part, to data quality failures. The infrastructure paradigm addresses this risk not through model-level fixes, but through systemic data trustworthiness.

    When data is treated as infrastructure, quality, lineage, freshness, and access control become engineering requirements, not afterthoughts. Just as civil engineers specify load tolerances for a bridge, data engineers must specify and enforce quality tolerances for the information that AI systems consume. This includes unstructured sources — a document repository with inconsistent versioning, outdated contracts, or unsanctioned shadow files is as dangerous to an AI-powered workflow as corrupted records in a relational database. Trustworthy AI, in the final analysis, is downstream of trustworthy data.
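    Extending the load-tolerance analogy, a data contract can state quality tolerances explicitly and fail the pipeline when they are breached, rather than letting degraded data flow silently downstream. The thresholds below are invented for illustration.

    ```python
    def check_quality_tolerances(batch: list[dict]) -> None:
        """Enforce illustrative quality tolerances before data is served to an AI system.

        Raises ValueError if completeness or freshness fall below the stated tolerance;
        the 98% / 95% thresholds are example figures, not recommendations.
        """
        total = len(batch)
        complete = sum(1 for row in batch if all(v is not None for v in row.values()))
        fresh = sum(1 for row in batch if row.get("age_days", 10**6) <= 30)

        completeness = complete / total if total else 0.0
        freshness = fresh / total if total else 0.0

        if completeness < 0.98:
            raise ValueError(f"completeness {completeness:.2%} below 98% tolerance")
        if freshness < 0.95:
            raise ValueError(f"freshness {freshness:.2%} below 95% tolerance")

    # A batch like this is rejected rather than silently consumed downstream.
    try:
        check_quality_tolerances([{"balance": None, "age_days": 400}])
    except ValueError as err:
        print(err)
    ```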

    3. Regulatory Resilience and Auditability

    Across industries and jurisdictions, the regulatory environment around AI is tightening rapidly. The EU AI Act, evolving SEC guidance on AI in financial services, HIPAA’s implications for AI in healthcare, and a growing patchwork of data privacy legislation all impose obligations that are fundamentally informational in nature. Regulators want to know: What data trained this model? What data informed this decision? Who had access to what, and when?

    Organizations that have adopted the infrastructure paradigm are far better positioned to answer these questions. A governed data environment — one with comprehensive lineage tracking, access audit logs, retention schedules, and documented classification schemes — does not merely satisfy compliance requirements. It creates the evidentiary foundation necessary to defend AI-assisted decisions under legal or regulatory scrutiny. Information governance, long regarded as a cost center, becomes a strategic liability shield. The organizations that invested in it before the regulatory wave arrived will spend far less managing it than those scrambling to retrofit governance onto ungoverned data estates.
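    One concrete way to build that evidentiary foundation is to persist a lineage record alongside every AI-assisted decision, capturing which data informed it and who requested it. The fields below are an assumed minimal set, not a regulatory checklist.

    ```python
    import json
    from datetime import datetime, timezone

    def lineage_record(model_version: str, source_ids: list[str],
                       requester: str, decision_id: str) -> str:
        """Return a JSON lineage entry linking an AI output to its inputs and access context.

        Field names are illustrative; a real schema would follow the organization's
        own audit and retention policies.
        """
        entry = {
            "decision_id": decision_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "source_document_ids": source_ids,   # what data informed this decision
            "requested_by": requester,           # who had access, and when
        }
        return json.dumps(entry)

    print(lineage_record("credit-scorer-v3", ["DOC-001", "DOC-042"], "analyst.17", "D-9001"))
    ```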

    4. Enabling Responsible AI Democratization

    AI’s most significant organizational impact may not come from a handful of sophisticated, centrally built models, but from the broad democratization of AI capabilities across business functions. Sales teams building their own retrieval tools, compliance officers using AI-assisted contract review, product managers querying unstructured customer feedback at scale — this is where AI transforms organizational velocity. But this democratization is only safe when it rests on a governed infrastructure layer.

    When every team draws from a common, well-governed data foundation, the democratization of AI tools does not fragment into a sprawl of inconsistent, conflicting, or non-compliant data practices. Federated access models, data mesh architectures, and self-service analytics platforms all depend, in the end, on the same principle: a trusted infrastructure layer that business users can draw from without needing to be data engineers themselves. This is the organizational analogue of public utilities — the individual user does not need to understand how the power grid works to reliably turn on the lights.

    Four Key Challenges Organizations Face in Adoption

    1. The Legacy Debt Problem

    Most large organizations carry decades of accumulated technical and informational debt. Data is siloed across incompatible systems. Metadata is absent, inconsistent, or wrong. Unstructured content is scattered across file shares, email archives, collaboration platforms, and business applications with no coherent taxonomy. Shadow data — copies, extracts, and derivatives created outside formal IT governance — proliferates in ways that are difficult to inventory, let alone govern.

    Treating this environment as infrastructure is not simply a matter of policy declaration. It requires substantial and often painful rationalization work: decommissioning legacy systems, migrating and reconciling historical data, establishing authoritative sources of truth for key information domains, and building cataloging capabilities for content that has never been described or classified. This is expensive, slow, and unglamorous — precisely the kind of foundational investment that struggles to compete for capital allocation against projects with more visible near-term returns. Leadership alignment on the long-term value of data infrastructure investment is a genuine organizational challenge, not merely a technical one.

    2. The Governance-Agility Tension

    There is a persistent and legitimate tension between the rigor that infrastructure-grade governance demands and the speed that modern AI development requires. Data science teams operating under competitive pressure to ship AI capabilities are often frustrated by governance processes they experience as friction — lengthy data access approvals, restrictive classification policies, slow procurement cycles for data tooling. The result is a well-documented organizational dynamic in which AI teams route around governance rather than working within it.

    This tension cannot be resolved by governance teams simply asserting authority, nor by AI teams circumventing oversight in the name of innovation. It requires the design of governance frameworks that are genuinely enabling rather than merely restrictive — frameworks that establish clear, fast-path access procedures for classified data types, that build trust through transparency rather than enforcement alone, and that treat data scientists and AI engineers as partners in the governance mission rather than as compliance risks to be managed. Getting this balance right requires cultural change as much as process design, and cultural change is always the hardest kind.

    3. The Unstructured Data Frontier

    While structured data governance has at least a mature body of practice to draw from, unstructured data governance remains, for most organizations, terra incognita. The tools are less standardized, the taxonomies less established, and the scale is orders of magnitude larger. A global enterprise may have hundreds of millions of documents, images, and communications that have never been classified, cataloged, or assessed for sensitivity. Bringing this content under governance sufficient to make it safely and reliably usable for AI represents a genuinely novel organizational and technical challenge.

    The risks are significant and bidirectional. Under-governing unstructured data exposes organizations to privacy violations, intellectual property leakage, and AI systems that inadvertently surface confidential or regulated content. Over-restricting it, however, forecloses the AI use cases — in knowledge management, customer intelligence, and regulatory compliance — that represent some of the highest-value applications of the technology. Calibrating this balance requires new capabilities in content intelligence, automated classification, and sensitive data detection that most organizations are only beginning to build.
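    As a hedged illustration of automated sensitive-data detection, the sketch below flags a few obvious PII patterns in unstructured text before it is indexed for AI use. Real deployments rely on trained classifiers and jurisdiction-specific rules; these patterns are simplistic by design.

    ```python
    import re

    # Deliberately simple patterns; production systems use trained detectors and
    # jurisdiction-specific rules, not a handful of regular expressions.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def scan_for_pii(text: str) -> dict[str, int]:
        """Return a count of matches per PII category found in the text."""
        return {name: len(pattern.findall(text)) for name, pattern in PII_PATTERNS.items()}

    sample = "Contact jane.doe@example.com, SSN 123-45-6789, re: card 4111 1111 1111 1111."
    print(scan_for_pii(sample))  # documents with hits are quarantined or masked before indexing
    ```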

    4. Talent and Organizational Design

    Building and maintaining data infrastructure at enterprise scale requires a workforce profile that most organizations do not yet have in sufficient depth. Data architects who understand AI workload requirements, information governance professionals fluent in both regulatory frameworks and machine learning pipelines, data engineers capable of building reliable unstructured data serving layers — these are scarce, expensive, and often poorly positioned within organizational hierarchies that have not caught up to the strategic importance of the function.

    Beyond individual talent, the organizational design question is equally vexing. Data infrastructure, by its nature, must serve the entire enterprise — but enterprises are organized into business units with local priorities, local budgets, and local incentives. The tension between centralized governance and decentralized ownership is not new, but AI amplifies its stakes considerably. Federated data mesh models offer one architectural response, but they require levels of cross-functional trust, standardization, and coordination that are genuinely difficult to sustain. Many organizations find themselves caught between a centralized model that moves too slowly and a decentralized one that produces fragmentation — and the path between these failure modes is neither obvious nor easy.

    The Strategic Imperative

    The infrastructure metaphor is not merely rhetorical. Infrastructure investment has always required organizations — and societies — to accept near-term costs for long-term, shared, compounding benefits. The interstate highway system was not built because any single company needed it. It was built because collective investment in foundational enablement creates conditions for prosperity that no individual actor could generate alone.

    The data infrastructure challenge facing today’s enterprises is structurally similar. No single AI model justifies the full investment required to build a governed, semantically rich, continuously maintained information substrate across structured and unstructured sources. But the aggregate of every AI application the organization will ever build, deploy, and scale — that enterprise justifies the investment many times over.

    The executives who understand this first will not just build better AI. They will build the kind of information foundation that makes their organizations structurally harder to compete against. In the age of AI, data infrastructure is not an IT concern. It is a strategic moat.

    The organizations that treat data as infrastructure today are building the highways that will determine who competes — and who doesn’t — in the economy of tomorrow.

    Like this? Show your support at buymeacoffee.com/RobGerbrandt

  • AI Adoption as a Mirror: What Your Organization’s AI Strategy Reveals About Its Culture

    The artificial intelligence revolution sweeping through enterprise corridors often feels like a technology narrative—one centered on models, algorithms, and computational prowess. Yet the real story of AI adoption is far more human. How an organization approaches artificial intelligence implementation reveals far more about its fundamental character than any corporate mandate or technology roadmap ever could. AI adoption is not merely a technical choice; it is a cultural artifact that exposes the organization’s deepest values, leadership philosophy, and capacity for change. In other words, culture is king.

    Consider the striking data from recent organizational research: only 17% of organizations have achieved leadership-driven AI adoption with clear strategies and policies, while 31% have no formal AI adoption strategy whatsoever. This fragmentation is not a technology failure. It reflects underlying cultural realities—whether an organization genuinely prioritizes strategic alignment, whether it trusts its workforce, and whether it has built the institutional muscle for deliberate transformation. Two companies with identical AI budgets and identical talent may produce radically different outcomes based on the cultural substrate in which they plant their technological seeds.[1]

    The Alignment Imperative: Strategy as Cultural Statement

    When leadership establishes a coherent AI strategy with clear goals and transparent communication, something profound happens. Organizations with structured AI adoption report 62% of employees as fully engaged—a figure far higher than in organizations that take a haphazard approach. This is not incidental. A well-articulated AI strategy telegraphs something essential about organizational culture: that leadership thinks systematically, communicates transparently, and believes employees deserve clarity about institutional direction.[1]

    Conversely, organizations that allow AI adoption to unfold chaotically—with 21% of employees independently experimenting without guidance—inadvertently reveal a culture characterized by ambiguity, fragmented decision-making, and perhaps most troublingly, limited trust in centralized leadership. The absence of formal strategy is not neutrality; it is a cultural statement about organizational values and priorities.

    The research here is unambiguous. Organizations with leadership-driven AI strategies are 7.9 times more likely to believe AI has positively impacted workplace culture compared to those without formal approaches. Critically, employees in these structured environments are 1.2 times more likely to report that their teams work well together. Strategy, then, functions as a cultural artifact—a mechanism through which organizations signal whether they believe in purposeful direction, collective alignment, and the power of coordinated action. In this sense, a mature AI strategy is as much a statement about who you are as it is about what technology you will deploy.[1]

    Trust as the Cornerstone of Technological Integration

    Perhaps no single factor predicts AI adoption success more reliably than organizational trust. Research from Great Place to Work reveals that organizations with high employee trust experience 8.5 times higher revenue per employee and 3.5 times stronger market performance. Yet trust does not emerge from technology budgets. It emerges from leadership behavior, transparency, and the cultural foundation leaders have spent years constructing.[2]

    When employees encounter AI without trust-building infrastructure, they interpret the technology through a lens of anxiety. A Wiley-published study examining employee trust configurations identified four distinct patterns: full trust (high cognitive and emotional trust), full distrust, uncomfortable trust (high cognitive but low emotional trust), and blind trust. The research revealed that these configurations trigger different behaviors—some employees detail their digital footprints openly, while others engage in data manipulation, confinement, or withdrawal. These responses create what researchers termed a “vicious cycle” in which degraded data inputs undermined AI performance, further eroding trust.[3]

    This cycle is rooted in organizational culture. In low-trust environments, AI adoption becomes a threat rather than an opportunity. Employees fear job displacement, question motives, and withhold engagement. In contrast, organizations that have cultivated genuine trust relationships experience what might be called “positive reciprocity”—employees extend benefit of the doubt, engage openly, and contribute their best thinking to AI initiatives. Trust, therefore, is not a nice-to-have ancillary to AI adoption. It is the cultural prerequisite that determines whether an organization’s AI investments generate value or waste.

    Adaptability: The Cultural Dimension That Determines Success

    One of the most revealing aspects of an organization’s culture is its relationship to change. Organizational research identifies adaptability as the single most important cultural dimension for predicting AI adoption success. Organizations that demonstrate flexibility, comfort with ambiguity, and willingness to experiment tend to integrate AI successfully. Those that prize control, stability, and predictability struggle.[4]

    This is precisely where culture functions as a mirror. An organization’s capacity for adaptability reflects decades of accumulated decisions about how leaders have responded to disruption, whether employees have been encouraged to voice concerns, and whether failure has been treated as a learning opportunity or a career liability. Rigid, control-oriented cultures typically cannot mobilize the psychological flexibility required for AI adoption because that flexibility was never culturally embedded in the first place.

    Organizations that invest substantially in change management recognize this reality implicitly. Research demonstrates that organizations investing in structured change management approaches are 1.6 times more likely to report that AI initiatives exceed expectations, and more than 1.5 times as likely to achieve desired outcomes. This statistical relationship reflects a cultural shift: the organization is signaling that it values deliberate transition management, employee support systems, and human-centered implementation. The change management investment is not about the technology; it is about whether leadership has the cultural consciousness to understand that transformation is fundamentally a human challenge.[5]

    Leadership Visibility as Cultural Signal

    How leaders personally engage with AI reveals the organization’s authentic cultural values. In organizations where executives visibly use AI tools, model experimentation, and discuss the technology openly, a message cascades through the organization: innovation is not peripheral; it is central to how we work. When leaders remain distant from actual AI engagement—delegating implementation entirely to technical teams—they communicate implicitly that AI is a specialist concern, not an organizational imperative.

    Research on AI-first leadership from Harvard Business School identifies a critical responsibility: leaders must bridge the gap between technological capabilities and strategic goals, foster cultures that embrace AI’s potential to complement human creativity, and demonstrate that they themselves understand and value the technology. This leadership visibility is not theater. It is a fundamental cultural signal about whether an organization’s values align with technological transformation or whether that transformation is being tolerated rather than embraced.[6]

    The Culture-Skills-Trust Triangle

    Successful AI adoption rests on three pillars, all of which are fundamentally cultural in nature. First, organizations must develop clear strategic communication about AI’s role and purpose. Second, they must invest substantially in skills development and ongoing learning. Third, they must proactively address trust, security, and ethical concerns with transparency and governance frameworks. Each of these pillars reflects cultural commitments: to clarity over ambiguity, to employee development over static competence requirements, and to ethical integrity over expedient corner-cutting.[7]

    Organizations that excel in all three dimensions typically share a distinctive cultural profile: they are transparent about challenges, they invest in people as their most important asset, and they view ethical considerations as non-negotiable strategic factors rather than compliance burdens. In contrast, organizations that struggle typically demonstrate cultural patterns of opacity, underinvestment in human development, and a tendency to treat ethics as an afterthought.

    The Uncomfortable Truth: Fear as a Cultural Diagnostic

    Interestingly, research reveals that high-achieving organizations report more than twice as much AI-related fear as low-achieving organizations. This counterintuitive finding offers profound insight into organizational culture. High-achieving organizations express fear because they have ambitious AI visions and understand the genuine stakes involved. But critically, these organizations pair that fear with two cultural characteristics: they express little desire to reduce headcount through automation, and they invest substantially in training and change management. Their fear becomes a catalyst for responsible action rather than a justification for avoidance.[5]

    Organizations that express minimal AI-related fear often demonstrate a more troubling cultural pattern: either they lack strategic ambition (and therefore have little to fear), or they have adopted a posture of denial about genuine risks and disruptions. In this sense, measured concern about AI is actually a cultural strength—a signal of organizational maturity and realistic assessment.

    Conclusion: What Your AI Strategy Says About You

    An organization’s approach to artificial intelligence adoption ultimately functions as a cultural X-ray. It reveals whether leadership thinks systematically or reactively, whether trust has been built or eroded, whether the organization values adaptability or prizes control, and whether employee development is treated as an investment or an obligation.

    The most successful organizations approach AI not as a technology problem but as a cultural challenge. They recognize that implementation success depends on transparent strategy, leadership visibility, change management infrastructure, trust-building mechanisms, and systems that empower employees while maintaining ethical governance. These organizations do not adopt AI despite their culture; they adopt AI because their culture makes adoption possible.

    The inverse is equally true. Organizations that struggle with AI adoption rarely suffer from technical limitations. They suffer from cultural constraints—fragmented decision-making, low trust, rigid hierarchies, limited communication, and underinvestment in people. In these organizations, AI becomes another marker of the deeper dysfunction rather than a catalyst for transformation.

    As you evaluate your organization’s AI adoption journey, resist the temptation to focus exclusively on technology decisions. Instead, examine the cultural fingerprints your choices reveal. What does your AI strategy say about how you value transparency and clarity? What does your change management investment reveal about whether you genuinely trust and support employees? What does your leadership’s personal engagement with AI technology communicate about whether transformation is authentic or performative? The answers to these questions will predict your AI success far more reliably than any technology selection ever could. Your organization’s relationship with AI is simply a more legible version of who you already are.

    Sources

    [1] AI’s Cultural Impact: New Data Reveals Leadership Makes … https://blog.perceptyx.com/ais-cultural-impact-new-data-reveals-leadership-makes-the-difference

    [2] The Human Side of AI: Balancing Adoption with Employee … https://www.greatplacetowork.ca/en/articles/the-human-side-of-ai-balancing-adoption-with-employee-trust

    [3] How employee trust in AI drives performance and adoption https://newsroom.wiley.com/press-releases/press-release-details/2025/How-employee-trust-in-AI-drives-performance-and-adoption/default.aspx

    [4] How Organizational Culture Shapes AI Adoption and … https://www.shrm.org/topics-tools/flagships/ai-hi/how-organizational-culture-shapes-ai-adoption-success

    [5] AI transformation and culture shifts https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/articles/build-ai-ready-culture.html

    [6] AI-First Leadership: Embracing the Future of Work https://www.harvardbusiness.org/insight/ai-first-leadership-embracing-the-future-of-work/

    [7] AI Adoption: Driving Change With a People-First Approach https://www.prosci.com/blog/ai-adoption

    [8] Post #5: Reimagining AI Ethics, Moving Beyond Principles to … https://www.ethics.harvard.edu/blog/post-5-reimagining-ai-ethics-moving-beyond-principles-organizational-values

    [9] AI Strategy & Culture: Driving Successful AI Transformation https://www.mhp.com/en/insights/blog/post/ai-strategy-and-culture

    [10] Beyond the Model: Unlocking True Organizational Value … https://www.transformlabs.com/blog/beyond-the-model-unlocking-true-organizational-value-from-ai

    [11] How AI is Reshaping Company Culture and Values https://cerkl.com/blog/ai-in-company-culture/

    [12] AI in the workplace: A report for 2025 https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work

    [13] The Role of Artificial Intelligence in Digital Transformation https://online.hbs.edu/blog/post/ai-digital-transformation

    [14] The Impact Of AI On Company Culture And How To … https://www.forbes.com/sites/larryenglish/2023/05/25/the-impact-of-ai-on-company-culture-and-how-to-prepare-now/

    [15] The Role of Leadership in Driving AI Implementation https://ewfinternational.com/the-role-of-leadership-in-driving-ai-implementation/

    [16] What is the Role of Culture in AI Adoption Success? https://www.thehrobserver.com/technology/what-is-the-role-of-culture-in-ai-adoption-success/

    [17] AI AND ORGANIZATIONAL CULTURE https://www.gapinterdisciplinarities.org/res/articles/(136-140)-AI-AND-ORGANIZATIONAL-CULTURE-NAVIGATING-THE-INTERSECTION-OF-TECHNOLOGY-AND-HUMAN-VALUES-20250705150542.pdf

    [18] 8 Ways Leaders Can Help Organizations Unlock AI https://www.iiba.org/business-analysis-blogs/8-ways-leaders-can-help-organizations-unlock-ai/

    [19] The Role of Organizational Culture Under Disruption Severity https://ieomsociety.org/proceedings/bangladesh2024/219.pdf

  • The Importance of Information Governance in ISO 27001 Certification

    In today’s digital landscape, organizations face increasing pressure to protect sensitive information, comply with regulatory requirements, and maintain stakeholder trust. ISO/IEC 27001, the international standard for information security management systems (ISMS), provides a structured framework for managing and securing information assets. However, successful implementation and certification of ISO 27001 depend heavily on a foundational discipline: Information Governance (IG). This essay explores the critical role of Information Governance in ISO 27001 certification, highlighting its influence on risk management, compliance, accountability, and organizational resilience.

    Understanding Information Governance

    Information Governance refers to the strategic framework and set of policies, procedures, and controls that ensure effective management of information throughout its lifecycle. It encompasses data quality, privacy, security, retention, and compliance, aligning information practices with business objectives and legal obligations. Unlike traditional IT governance, IG is cross-functional, involving stakeholders from legal, compliance, records management, IT, and business units.

    ISO 27001 Overview

    ISO 27001 is a globally recognized standard that specifies the requirements for establishing, implementing, maintaining, and continually improving an ISMS. Its core objective is to protect the confidentiality, integrity, and availability of information by applying a risk management process. The standard includes clauses related to leadership, planning, support, operation, performance evaluation, and improvement, along with Annex A controls covering areas such as access control, cryptography, physical security, and incident management.

    The Intersection of IG and ISO 27001

    While ISO 27001 provides the framework for securing information, Information Governance ensures that the information being protected is accurate, relevant, and managed in accordance with legal and business requirements. The synergy between IG and ISO 27001 is essential for several reasons:

    1. Establishing Clear Ownership and Accountability

    Information Governance defines roles and responsibilities for data stewardship, ownership, and custodianship. This clarity is crucial for ISO 27001, which requires documented responsibilities for information security. Without IG, organizations may struggle to identify who is accountable for specific data sets, leading to gaps in security controls and audit trails.

    2. Enhancing Risk Management

    Effective IG provides visibility into the types of information held, their value, and associated risks. This insight is vital for ISO 27001’s risk assessment and treatment processes. By categorizing data based on sensitivity and criticality, organizations can prioritize security controls and allocate resources efficiently. IG also supports the identification of legal and regulatory risks, which must be addressed in the ISMS.

    3. Supporting Compliance and Legal Requirements

    ISO 27001 requires organizations to consider legal, regulatory, and contractual obligations related to information security. Information Governance ensures that data handling practices comply with laws such as GDPR, HIPAA, and industry-specific regulations. It facilitates the creation of policies for data retention, disposal, and breach notification, which are essential for both compliance and certification.

    4. ISO 27001 Data Retention Policies and Data Disposition

    A critical aspect of Information Governance within ISO 27001 is the management of data retention and disposition. Annex A of ISO 27001 includes controls that address the handling of information and storage media throughout their lifecycle, including secure disposal when no longer needed (for example, the media handling and disposal controls grouped under A.8.3 in the 2013 edition of the standard).

    • Data Retention Policies: These policies define how long different types of data should be retained based on legal, regulatory, and business requirements. Information Governance ensures that retention schedules are documented, justified, and consistently applied. Retaining data longer than necessary increases risk and cost, while premature deletion can lead to compliance violations or loss of valuable information.
    • Data Disposition: Secure and verifiable disposal of data is essential to prevent unauthorized access or data breaches. ISO 27001 requires organizations to implement controls that ensure data is destroyed in a manner that renders it unrecoverable. IG supports this by establishing procedures for data sanitization, physical destruction of media, and audit trails to verify compliance.

    Together, these practices help organizations reduce data sprawl, minimize exposure to risk, and demonstrate due diligence during audits. They also align with broader privacy principles such as data minimization and purpose limitation.
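
    To make the retention and disposition practices just described more concrete, the following is a minimal sketch, assuming a hypothetical retention schedule keyed by record type; the record types, retention periods, and field names are illustrative assumptions, not requirements drawn from ISO 27001 itself.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical retention schedule: record type -> retention period in days.
# Actual periods must come from legal, regulatory, and business requirements.
RETENTION_SCHEDULE = {
    "invoice": 7 * 365,         # e.g., financial records kept roughly seven years
    "support_ticket": 3 * 365,
    "marketing_email": 365,
}

@dataclass
class Record:
    record_id: str
    record_type: str
    created: date

def is_due_for_disposition(record: Record, today: date) -> bool:
    """Return True when the record has exceeded its scheduled retention period."""
    retention_days = RETENTION_SCHEDULE.get(record.record_type)
    if retention_days is None:
        # Unclassified records are flagged for review rather than silently kept or deleted.
        raise ValueError(f"No retention rule for record type: {record.record_type}")
    return today > record.created + timedelta(days=retention_days)

if __name__ == "__main__":
    rec = Record("INV-001", "invoice", date(2017, 1, 15))
    print(is_due_for_disposition(rec, date.today()))  # True once the retention period has elapsed
```

    In practice, the same schedule would drive documented, auditable destruction (data sanitization or physical media destruction) rather than a simple check, but even a lightweight rule set like this makes retention decisions explicit and reviewable.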

    5. Improving Data Quality and Integrity

    Poor data quality undermines the effectiveness of security controls and decision-making. IG promotes data accuracy, consistency, and completeness, which are critical for ISO 27001’s control objectives. For example, access control policies depend on reliable user and asset information. IG also supports audit readiness by ensuring that records are complete and traceable.

    6. Facilitating Documentation and Evidence

    ISO 27001 certification requires extensive documentation, including policies, procedures, risk assessments, and control implementation records. Information Governance provides the structure for managing documentation, version control, and retention schedules. It ensures that evidence required for audits is readily available and trustworthy.

    7. Driving Cultural Change and Awareness

    Information Governance fosters a culture of accountability and ethical information use. This cultural shift complements ISO 27001’s emphasis on leadership and awareness. Training programs, communication strategies, and performance metrics developed under IG can be leveraged to promote security awareness and employee engagement in the ISMS.

    8. Enabling Continuous Improvement

    Both IG and ISO 27001 advocate for continuous improvement. IG provides mechanisms for monitoring data usage, policy compliance, and emerging risks. These insights feed into ISO 27001’s performance evaluation and improvement processes, enabling organizations to adapt to changing threats and business needs.

    Practical Steps to Integrate IG into ISO 27001 Projects

    To maximize the benefits of Information Governance during ISO 27001 certification, organizations should consider the following steps:

    • Conduct an Information Inventory: Identify and classify all information assets, including structured and unstructured data, to understand what needs protection (a minimal inventory sketch follows this list).
    • Define Governance Policies: Establish policies for data ownership, access, retention, and disposal aligned with legal and business requirements.
    • Engage Stakeholders: Involve cross-functional teams in governance and security planning to ensure comprehensive coverage and buy-in.
    • Implement Data Lifecycle Management: Manage information from creation to disposal, ensuring security controls are applied at each stage.
    • Monitor and Audit: Use IG tools to track data usage, policy compliance, and anomalies, feeding insights into the ISMS.
    • Align Metrics and KPIs: Develop performance indicators that reflect both governance and security objectives, supporting continuous improvement.
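
    As a starting point for the inventory step referenced above, here is a minimal sketch, assuming a hypothetical asset record with illustrative classification levels; the field names and levels are examples for discussion, not ISO 27001 requirements.

```python
from dataclasses import dataclass

# Illustrative classification levels, ordered from least to most sensitive.
CLASSIFICATIONS = ["public", "internal", "confidential", "restricted"]

@dataclass
class InformationAsset:
    name: str
    owner: str              # accountable business owner (an IG role, not just IT)
    system: str             # where the asset lives
    classification: str     # one of CLASSIFICATIONS
    contains_personal_data: bool

def assets_needing_priority_controls(assets):
    """Flag assets that should be prioritized in the ISO 27001 risk assessment."""
    return [
        a for a in assets
        if a.classification in ("confidential", "restricted") or a.contains_personal_data
    ]

inventory = [
    InformationAsset("Customer CRM export", "Sales Ops", "CRM", "confidential", True),
    InformationAsset("Public price list", "Marketing", "Website CMS", "public", False),
]

for asset in assets_needing_priority_controls(inventory):
    print(f"Prioritize controls for: {asset.name} ({asset.classification})")
```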

    Challenges and Considerations

    Integrating Information Governance into ISO 27001 projects is not without challenges. Organizations may face resistance to change, lack of resources, or fragmented data environments. Overcoming these hurdles requires strong leadership, clear communication, and a phased implementation approach. Leveraging frameworks such as COBIT, ITIL, and NIST can also support IG maturity and alignment with ISO 27001.

    Conclusion

    Information Governance is not merely a supporting function in ISO 27001 certification—it is a strategic enabler. By ensuring that information is well-managed, compliant, and aligned with business goals, IG lays the foundation for a robust and effective ISMS. Organizations that embrace IG as part of their ISO 27001 journey are better equipped to manage risks, demonstrate compliance, and build trust with stakeholders. In an era where information is both an asset and a liability, integrating governance and security is not just best practice—it is essential.

  • Information Governance in 2025: Board-Level Oversight of Cybersecurity, Artificial Intelligence, Privacy, and Risk Management


    Executive Summary

    As 2025 unfolds, Boards of Directors find themselves at the epicenter of an unprecedented convergence of digital innovation, regulatory challenge, and emergent risk. The stakes have never been higher, nor the expectations more complex. From the relentless pace of cyber threats powered by artificial intelligence, to the ethical and regulatory labyrinth of AI deployment, and the rapidly expanding universe of privacy compliance and information governance, boards are being called to exercise a level of vigilance and strategic leadership rarely demanded in prior decades.

    This report provides a deep analysis of the evolving best practices and emerging concerns in information governance that demand board-level attention. Structured around the four cornerstone themes – Cybersecurity, Artificial Intelligence, Privacy, and Risk Management – it explores not only the foundational responsibilities but also the nuanced ways in which modern board and committee oversight are evolving to match a volatile and hyper-connected environment.

    A central takeaway is that effective information governance is no longer a matter of compliance or technology alone. Rather, it is a strategic differentiator, a significant driver of competitive advantage, and a critical measure of ESG performance and reputational resilience. This report draws on extensive recent data and trends, recommendations from leading consultancy and governance organizations, and lessons from regulatory and litigation developments across multiple jurisdictions.


    Table: Board-Level Information Governance – Key Themes, Responsibilities, and Concerns (2025)

    Theme | Key Board Responsibilities | Primary Concerns / Strategic Priorities | Common Oversight Structures / Notes
    Cybersecurity | Strategic risk oversight; CISO-board relationship; incident response and scenario planning; tech acumen | Evolving threat landscape, regulatory compliance, third-party exposure, ESG impact | Tech/Risk committees, subcommittees, full board
    AI | AI governance charters; ethics/bias oversight; scenario planning; board education and expertise | Ethical risk, regulatory lag, ROI, innovation vs. controls, stakeholder trust | Audit, risk, technology/AI, ESG, full board
    Privacy | Privacy-by-design oversight; program quality; compliance posture; evidence of accountability | Regulatory change, consumer trust, cross-border frameworks, litigation risk | Risk, compliance, audit, tech/digital, full board
    Risk Mgmt | Dynamic, multidisciplinary committee evolution; cross-functional reporting; crisis readiness | New interlinkages (cyber, privacy, ESG), third-party risk, emerging risks | Standing risk/audit/tech committees, new hybrids

    Table Elaboration

    This summary table distills the sprawling imperatives facing boards in 2025, but the true import of each cell comes alive only in the detailed analysis that follows. For instance, the primary concern of regulatory compliance referenced for all four areas is no longer static: regulations covering AI and privacy are in a dynamic state of flux in the US, Canada, UK, EU, and throughout Asia-Pacific, requiring boards to move beyond reactive compliance to proactive oversight and scenario-based risk planning. Similarly, competitive advantage through cybersecurity, long considered an IT ambition, is now a C-suite and board KPI strongly linked to trust, revenue, and ESG performance.


    Cybersecurity: From Technical Silo to Board-Level Strategic Differentiator

    The Shifting Landscape of Cyber Risk

    In 2025, the volume and sophistication of cyber threats continue to soar, fueled by nation-state actors, criminal syndicates, and the democratization of attacker tools via generative AI. Cyber risk is now routinely listed as the top threat by global board members, executives, and risk practitioners across industries – from financial services to manufacturing, energy, retail, and healthcare. The Allianz Risk Barometer again ranked cyber incidents as the leading global business risk for the year, with executive surveys echoing these concerns.

    Cyber risk is not static: new attack vectors – such as supply chain attacks, ransomware-as-a-service, deepfake-driven phishing, and attacks on AI models – demand that boards move far beyond mere technical literacy or delegated oversight. Digital resilience, encompassing both defense and recovery capabilities, is now fundamental to business continuity and valuation protection.

    Board – CISO Relationship and Dynamics

    The relationship between the Chief Information Security Officer (CISO) and the board has become a defining factor in long-term cyber resilience. Leading practice requires the CISO to be empowered as a business enabler, regularly briefing the board not just on technical controls and incident counts, but on business risk metrics, threat intelligence, scenario planning, and strategic investment priorities.

    Boards are also expected to bridge the typical technical-business gap by asking CISOs to present data and stories in terms directors understand: risk to operations, financial exposure, and compliance with board-defined risk appetite. Open, recurring engagement (rather than sporadic, compliance-driven reporting) is cited as a keystone of mature board–CISO partnerships.

    Boards are increasingly including cybersecurity skills in director selection matrices. Yet even boards without such expertise must ensure regular education sessions and access to external advisors to bridge technical knowledge gaps and oversee cybersecurity in strategic, enterprise terms.

    Cybersecurity as Competitive Advantage and ESG Consideration

    A central trend is the recognition of cybersecurity as a direct competitive advantage and ESG (Environmental, Social, Governance) pillar. Boards are expected to demonstrate how investments in cybersecurity not only protect against loss, but also enable trust, drive resilience, and support sustainable business models. Cyber resilience has become an explicit expectation for investors and rating agencies when evaluating a company’s long-term prospects.

    Third-Party and Supply Chain Risk

    Boards now oversee third-party and supply chain cyber risk as a central concern, given the cascade effect of high-profile breaches (e.g., SolarWinds, NotPetya) and regulatory focus on operational resilience. Effective oversight requires boards to confirm that management maintains a risk-ranked inventory of critical third-party relationships, enforces rigorous due diligence, and applies continuous monitoring and incident response playbooks that extend beyond the organization’s boundaries.

    Incident Response and Scenario Planning

    Crisis readiness is a non-negotiable board obligation. Modern boards demand that incident response plans are documented, tested through tabletop simulations, and mapped to the organization’s risk appetite. Active scenario planning – envisioning not only likely attacks but unthinkable “black swan” events – enables directors to test assumptions, clarify roles, and build organizational muscle for rapid, values-aligned response in a crisis.

    Summary of Board-Level Cybersecurity Best Practices

    • Regular, direct engagement between board and CISO, with mutual understanding of business risk
    • Technology and cyber expertise included in board composition, or accessible via advisors/committees
    • Cybersecurity considered in ESG frameworks, reporting, and board KPIs
    • Supply chain and third-party cyber risk integrated into enterprise TPRM programs, with board visibility
    • Incident response and crisis scenario planning as standing board agenda items, with periodic simulations
    • Cyber risk issues integrated into strategy discussions – not sidelined as technical/IT topics
    • Independent external maturity assessments and regular benchmarking against industry peers
    • Board oversight of breach notification, ransom response, and public/stakeholder communications.

    Artificial Intelligence (AI): Board Challenges and Strategic Governance in the Age of Intelligence

    AI Oversight Moves Center Stage

    AI governance has emerged in 2025 not just as a compliance challenge but as a board-level strategic dilemma. The proliferation and mainstreaming of generative AI, machine learning, and automation tools across nearly every business function have confronted directors with the need to govern in a domain characterized by rapid innovation, regulatory uncertainty, and substantial risk of operational, legal, and reputational harm.

    Recent survey data indicate remarkable growth in board commitment: as of 2024, over 31% of S&P 500 companies disclosed some level of board or committee oversight of AI, and 20% had at least one director with AI expertise (up from 11% in 2022). Disclosure and committee charters covering AI are trending upwards across sectors, with particular growth in the Information Technology, Communications, and Consumer Discretionary industries.

    AI Governance: Committee Structures and Reporting Lines

    Boards employ a variety of oversight models. Best practice is sector- and organization-specific but often involves expanding the remit of the risk, audit, or technology committees, or establishing new AI/technology committees to clarify accountability for AI risk, ethics, and strategy.

    Notably, shareholder activism in 2024 – 2025 pushed several large companies (banks, retailers, technology giants) to amend committee charters for explicit AI oversight and to improve disclosure around board-level AI governance. There is a marked trend toward assigning strategic AI oversight responsibilities at the full board level – indicating increasing recognition of AI’s pervasive impact beyond IT or compliance domains.

    AI Ethics Boards and Cross-Functional Governance

    Explicit AI Ethics and Review Boards remain relatively rare (about 2–3% adoption in S&P 500) but are increasing, especially in industries with direct AI R&D or significant customer-facing automation. These entities report, variably, to the board, the risk committee, or the CEO and serve as multi-disciplinary risk/ethics panels – a practice recommended by both global regulators (OECD, UNESCO, NIST, ISO) and leading governance consultancies.

    Key responsibilities for such entities include reviewing potential model bias, explainability, safety, privacy compliance, and the implementation of ethical guidelines – often in response to regulatory frameworks or sector standards such as the EU AI Act, AI Risk Management Frameworks, and jurisdictional voluntary codes of conduct.

    AI Risk, Bias, and the Board’s Fiduciary Duties

    AI’s sheer velocity and complexity dramatically increase the risk that unmonitored automation could escalate small model errors into systemic issues (ranging from discriminatory outcomes to operational, legal, or IP exposure) before human controls intervene. Leading boards are therefore building scenario analysis, real-time monitoring, and bias/ethical auditing into their oversight scope, often through the development of internal AI Centers of Excellence reporting directly to the board’s risk committee rather than to IT or business units.

    Board Education and Expertise in AI

    Most directors still report only foundational knowledge of AI risks and governance models, with only a minority having hands-on experience or technical backgrounds – a consistent finding across US, Canadian, and European surveys. Training and board refreshment with AI-savvy members or advisor participation in targeted committee meetings are cited as effective strategies to raise overall board fluency.

    Shareholder Proposals and External Pressure

    AI-focused shareholder proposals grew more than fourfold in 2024, spanning requests for impact assessments, ethical use commitments, transparency on data sourcing, and amendments to board committee charters. These proposals are appearing in sectors well outside “Big Tech”: finance, retail, telecoms, media, consumer services, and even industrials and oil – signaling that investors see AI preparedness and governance as key to long-term corporate value.

    AI Regulation: Fast-Evolving, Broad in Scope

    Boards are advised to track not only sectoral and jurisdiction-specific regulations (e.g., EU AI Act, Canadian AIDA, US state initiatives, industry codes) but also voluntary global standards (OECD Principles, NIST AI RMF, UNESCO, ISO/IEC 42001, IEEE 7000) that shape responsible AI and can be used to demonstrate “reasonable care” in oversight.

    Canadian and US boards, for instance, face a patchwork of privacy and AI mandates, with provinces like Québec already enforcing disclosure and transparency on AI-based decision-making and hiring, and national regulators encouraging voluntary adoption of governance frameworks, pending federal law finalization.

    AI Governance as Innovation Catalyst and Reputational Shield

    Despite the risk focus, directors and shareholders recognize AI’s potential to drive value transformation, operational efficiency, and market differentiation. Best-in-class board AI governance supports, rather than stifles, responsible innovation by setting clear strategic objectives, establishing agile and transparent oversight, and nurturing an experimental, evidence-based learning culture for rapid adaptation to AI-driven disruption.


    Privacy: From Compliance Backwater to Boardroom Priority

    Explosion of Privacy Regulation and Litigation

    Privacy is now firmly a board-level concern – driven by the explosive growth of global regulations (GDPR, CCPA, CPRA, Law 25 in Québec, and others), the proliferation of class-action lawsuits and shareholder litigation following breaches, and steep increases in statutory penalties for non-compliance. In Canada, the emergence of privacy class actions (often following data breaches), the introduction of administrative monetary penalties, and the increasing focus of proxy advisors and ESG rating agencies mean boards are directly accountable for privacy program effectiveness.

    Similar dynamics are evident globally, with regulatory scrutiny reaching new heights – from the EU’s record fines to US SEC enforcement actions, and new requirements for data minimization, transparency, and expanded individual rights.

    Board’s Duty of Oversight and Accountability

    Boards are required as part of their fiduciary and statutory duties to exercise active, documented oversight of data privacy – ensuring the company understands the purposes and methods of its data collection, is transparent with stakeholders, conducts periodic risk and program reviews, and maintains a robust compliance posture with all relevant laws.

    A privacy-by-design approach – embedding privacy safeguards into systems and processes at the outset rather than retrofitting them in response to incidents – is cited as board-level leading practice, and demonstrably strengthens consumer trust and compliance outcomes.

    Privacy by Design and Board Responsibility

    Successful integration of privacy into product, process, and business model design requires direct top-management and board support. Case studies across sectors find that proactive leadership, cross-functional risk and assessment processes, and executive education foster a culture in which privacy is a strategic asset, rather than a legal burden.

    Boards must ensure that privacy programs are adequately funded, systematically reviewed, and that the company can furnish evidence of compliance to regulators on demand. This includes comprehensive documentation, mapping of personal data flows, PIA (Privacy Impact Assessment) procedures, and quarterly targeted staff training – especially for teams handling sensitive information or managing third-party contracts.

    Privacy Risk Governance and Third-Party Exposure

    With data-driven business models reliant on a vast ecosystem of vendors and cloud partners, boards face growing exposure from privacy failures in third parties. Effective board oversight now includes vendor data privacy due diligence, monitoring, and clear contract compliance requirements aligned with applicable frameworks (e.g., NIST, ISO 27001).

    Emerging Privacy Priorities for Boards

    • Defining and regularly reviewing the company’s purpose for personal information collection and retention
    • Mandating and overseeing privacy-by-design principles, and documenting evidence of program maturity
    • Preparing for class action and derivative lawsuits linked to breach of privacy oversight duty
    • Integrating privacy into board-level crisis and reputation management, with ready response plans for data incidents
    • Supporting CPO/DPO roles and regular reporting to the board on privacy metrics and program status
    • Ensuring strategic alignment between business growth initiatives and privacy compliance impacts.

    Risk Management: The Evolution of Board Risk Committees and Oversight Mechanisms

    The New Face of Board-Level Risk Committees

    In 2025, the evolution of risk management at board level has become as dramatic as the changes in the risk landscape itself. The traditional, quarterly risk review has proven utterly insufficient – modern committees must now operate as dynamic, multidisciplinary teams that actively monitor, anticipate, and lead rapid organizational responses to digital, regulatory, ESG, and geopolitical risks.

    Charters are being updated to codify information governance, cybersecurity, AI ethics, and data privacy oversight as standing committee responsibilities. Sector exemplars include adding digital, privacy, or AI specialists to risk and audit committees, and establishing cross-functional links with internal audit, legal, compliance, technology, and ESG teams.

    Board Composition and Digital/Tech Acumen

    Digital acumen in the boardroom is no longer a “nice to have.” There is clear evidence that boards with members experienced in data science, cybersecurity, privacy law, and AI governance are better equipped to challenge management, ask the right questions, and maintain actionable risk registers that reflect real operational exposure.

    Further, the need to bridge expertise gaps is driving the formation of technology/innovation committees, the routine use of external experts, and targeted director education programs focused on technology and risk developments.

    Dynamic Crisis and Incident Scenario Planning

    Agile risk oversight requires readiness not only for likely risks but also for “unthinkable” black swan events – high-impact, low-likelihood scenarios such as systemic supply chain attacks, major regulatory shocks, or catastrophic AI failures. Boards are increasingly engaging in scenario planning, crisis simulations, and after-action reviews to pressure-test assumptions and foster organizational learning at all levels.

    ESG and Information Governance Integration

    Boards are aligning risk management with ESG priorities, including the development of “double materiality” frameworks that consider both financial and broader societal impacts of cyber, privacy, and AI risks. Disclosure of material IT, cyber, and AI risks in ESG reporting is quickly becoming a global investor expectation and board-level requirement. Supply chain, third-party, and data privacy exposures must be incorporated into both sustainability and financial risk reporting frameworks.

    Standing Committee Charters and Best Practices

    Charters for risk, audit, and/or technology committees should be reviewed and updated annually, specifically to clarify oversight roles for cyber, AI, privacy, third-party, and ESG-related risks. Boards should also ensure sufficient skills diversity within committees and establish clear escalation, reporting, and review cadence for major incidents and audits.


    Key Cross-Theme Trends and Strategic Priorities

    1. Board Education and Digital Fluency

    The pace of technology-driven change means that continuous director education is an existential necessity. Boards must regularly schedule briefings on cybersecurity, AI risk, privacy law, and best-practice governance frameworks, leveraging both internal and external subject matter expertise.

    2. Crisis Scenario Planning and Response Integration

    Crisis management is now everybody’s business, starting in the boardroom. The board’s defined role is strategic oversight: supporting management without stepping into operational decision-making during a crisis. This requires regular crisis simulations, not just tabletop exercises, along with clear post-mortem analysis and integration of lessons learned into the ongoing risk agenda.

    3. Proactive Engagement with Shareholders and Regulatory Developments

    Directors should anticipate ongoing activism and regulatory scrutiny around all facets of information governance. Proactive disclosure, transparent principles for AI and data management, and regular engagement with investors on ESG, privacy, and risk management practices are now seen as foundational to reputational and capital resilience.

    4. Third-Party/Supply Chain, Cross-Functional, and Multijurisdictional Risks

    Third-party risk is not only a cyber issue – it is integral to privacy, AI, and ESG frameworks. Boards must oversee holistic TPRM programs, cross-functional reporting, and real-time risk dashboards that reflect the interconnectedness and global reach of today’s digital ecosystem.

    5. Embedding Privacy and Security by Design

    Privacy by Design and Security by Design are no longer slogans but board-level imperatives, required by regulation and expected by customers and investors. Boards must oversee the proactive integration of these principles into product development, operations, M&A due diligence, and supply chain practices.


    Conclusion: The Board’s Imperative in Information Governance for 2025 and Beyond

    The expanding scope, scale, and intensity of information governance challenges require a new mindset at the board level. Directors must operate as strategic navigators, charting a course between innovation and risk, compliance and value creation, adaptation and assurance.

    Best-in-class boards will distinguish themselves not through mere compliance, but by embedding information governance into the very fabric of their organization’s culture, strategy, and stakeholder relationships. This means fostering mature CISO-board partnerships, demanding robust cross-functional risk governance, committing to director education, and setting a visible example that positions the company as a trusted, resilient, and ethical participant in the digital economy.

    The age of digital transformation, AI ascendancy, and global data flows will only accelerate. Boards of Directors – in their composition, committee structures, charters, and everyday practices – will define their organizations’ capacity to not only weather the coming waves of risk and regulation, but to seize the unprecedented opportunities they bring.


  • Minimum Viable Governance: A Lean Blueprint for Integrated Oversight in the Age of AI and Data

    By Robert Gerbrandt

    Why Governance Needs a Reboot

    Governance has long been the backbone of organizational integrity. Yet, as digital transformation accelerates, legacy models—often siloed and process-heavy—struggle to keep pace. The rise of AI introduces new ethical and operational risks, while data proliferation demands tighter controls and clearer ownership. Meanwhile, information governance remains critical for privacy, security, and regulatory compliance. The Minimum Viable Governance (MVG) framework responds to this complexity with a “minimum viable” philosophy: deliver essential governance outcomes with the least possible burden. It’s not about cutting corners—it’s about cutting clutter.

    Intersection and Commonality between MVG, MVD, MVP, and MVE

    The concept of “minimum viable” has been widely adopted across various domains, each with its unique focus but sharing a common philosophy of delivering essential value with minimal resources. This section explores the intersection and commonality between Minimum Viable Governance (MVG), Minimum Viable Design (MVD), Minimum Viable Product (MVP), and Minimum Viable Experience (MVE).

    Minimum Viable Governance (MVG): MVG aims to deliver essential governance outcomes with the least possible burden. It focuses on integrating the foundational elements of Information Governance, Data Governance, and AI Governance into a cohesive, lightweight, and scalable framework.

    Minimum Viable Design (MVD): MVD emphasizes creating the simplest design that meets the core needs of users. It prioritizes functionality and user experience while avoiding unnecessary complexity. The goal is to deliver a design that is both effective and efficient, ensuring that users can achieve their objectives with minimal friction.

    Minimum Viable Product (MVP): MVP is a development strategy that focuses on creating a product with just enough features to satisfy early adopters. The primary objective is to gather feedback and validate the product idea before investing significant resources. MVP allows for iterative improvements based on user feedback, ensuring that the final product meets market demands.

    Minimum Viable Experience (MVE): MVE extends the concept of MVP by emphasizing the overall user experience. It aims to deliver a complete and satisfying experience with the minimum necessary features. MVE ensures that users not only find the product functional but also enjoyable and engaging.

    Commonality and Intersection:

    1. Lean Approach: All four concepts adopt a lean approach, focusing on delivering essential value with minimal resources. They prioritize efficiency and effectiveness, ensuring that the core objectives are met without unnecessary complexity.

    2. User-Centric: Each concept places a strong emphasis on the user. Whether it’s governance, design, product development, or experience, the primary goal is to meet the needs and expectations of the user.

    3. Iterative Improvement: The philosophy of iterative improvement is central to all four concepts. By starting with a minimal viable version, they allow for continuous feedback and refinement, ensuring that the final outcome is aligned with user needs and market demands.

    4. Scalability: Each concept is designed to be scalable. They provide a foundation that can be expanded and enhanced over time, allowing organizations to grow and adapt as needed.

    In summary, MVG, MVD, MVP, and MVE share a common philosophy of delivering essential value with minimal resources. They emphasize a lean, user-centric approach that allows for iterative improvement and scalability. By adopting these principles, organizations can achieve their goals more efficiently and effectively.

    The MVG Framework: Six Core Components

    MVG integrates the foundational elements of Information Governance, Data Governance, and AI Governance into six actionable components. Each is designed to be lightweight, scalable, and easy to implement.

    1. Lightweight Governance Committee

    Rather than multiple governance bodies, MVG proposes a single, cross-functional committee with representatives from each domain. This team meets quarterly and during incident reviews, owning and updating core policy documents. The goal: maintain oversight without over-administration.

    2. Role Assignment & Accountability

    MVG emphasizes clear ownership. Each business unit assigns data stewards for information systems, datasets, and AI models. These stewards log decisions in simple formats—spreadsheets suffice—ensuring traceability and accountability.

    3. Risk and Compliance Registry

    A central, living document tracks key assets, associated risks, mitigation strategies, and regulatory requirements. Updates occur only when assets change significantly or incidents arise. This registry becomes the heartbeat of governance visibility.
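
    As a sketch of what such a registry might contain, here is a minimal example assuming illustrative field names and a hypothetical entry; MVG itself does not prescribe this structure, and a spreadsheet with the same columns would work equally well.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RegistryEntry:
    asset: str              # information system, dataset, or AI model
    owner: str              # accountable steward or owner
    risks: list[str]        # key risks in plain language
    mitigations: list[str]  # controls or actions in place
    regulations: list[str]  # applicable legal/regulatory requirements
    last_reviewed: date

registry = [
    RegistryEntry(
        asset="Customer churn model",
        owner="Analytics Steward",
        risks=["biased predictions", "use of stale training data"],
        mitigations=["quarterly bias check", "documented retraining schedule"],
        regulations=["privacy law applicable to customer data"],
        last_reviewed=date(2025, 1, 10),
    ),
]

def stale_entries(entries, today):
    """Entries not reviewed within a year are flagged as out of date for audit purposes."""
    return [e for e in entries if (today - e.last_reviewed).days > 365]

print(len(stale_entries(registry, date(2026, 3, 1))))  # 1
```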

    4. Policy Checklists

    Forget 50-page manuals. MVG uses three one-page checklists—one each for information, data, and AI. These are reviewed during onboarding and major changes, ensuring consistent policy application without overwhelming staff.
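
    For illustration, a one-page checklist can be as simple as a short list of yes/no questions reviewed at onboarding or change time; the questions below are hypothetical examples, not the MVG checklists themselves.

```python
# Hypothetical one-page AI checklist, expressed as yes/no questions.
AI_CHECKLIST = [
    "Is there a named steward for this model?",
    "Is the training data source documented?",
    "Has an explainability or fairness review been done for high-impact use?",
    "Is there a rollback or human-override path if the model misbehaves?",
]

def review(checklist, answers):
    """Return the questions that failed review (answered 'no' or left blank)."""
    return [q for q in checklist if not answers.get(q, False)]

answers = {AI_CHECKLIST[0]: True, AI_CHECKLIST[1]: True}
for gap in review(AI_CHECKLIST, answers):
    print("Open item:", gap)
```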

    5. Incident & Issue Response Protocol

    MVG simplifies incident management with a basic form capturing what happened, which asset was affected, actions taken, and ownership. Significant events are reviewed by the committee, and lessons learned are logged for organizational growth.
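
    A minimal version of that form, with hypothetical field names chosen for illustration, might look like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentReport:
    reported_on: date
    what_happened: str    # short factual description
    affected_asset: str   # system, dataset, or AI model involved
    actions_taken: str    # immediate containment or remediation
    owner: str            # who is accountable for follow-up
    significant: bool     # significant events go to the committee for review

incident_log: list[IncidentReport] = []

def log_incident(report: IncidentReport) -> None:
    incident_log.append(report)
    if report.significant:
        print(f"Escalate to governance committee: {report.what_happened}")

log_incident(IncidentReport(
    reported_on=date.today(),
    what_happened="Customer dataset shared without approval",
    affected_asset="CRM export",
    actions_taken="Access revoked; recipients notified",
    owner="Data Steward, Sales",
    significant=True,
))
```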

    6. Lightweight Audit & Training

    Annual self-assessments against the checklists and risk registry replace exhaustive audits. One mandatory awareness session per year ensures staff remain informed and engaged.

    Governance Roles: Lean but Accountable

    MVG’s role structure is designed for clarity and agility. Key roles include:

    • Governance Committee: Owns policies, reviews incidents, and ensures compliance.
    • CIO (Optional): Provides strategic oversight and escalates issues.
    • Domain Stewards: Manage quality, documentation, and ethical use across data, AI, and information.
    • Owners: Assign business value, approve changes, and manage lifecycle rules.
    • Custodians/Technical Leads: Implement controls and manage infrastructure.
    • Compliance Champions: Advise on regulations and support policy development.
    • Business Users: Use assets responsibly and participate in training.

    This structure ensures direct accountability while remaining lean enough for rapid deployment.

    Measuring What Matters: MVG KPIs

    Governance without measurement is guesswork. MVG introduces eight practical KPIs to track performance across domains:

    1. Training Completion: Percentage of staff completing annual governance training.
    2. Ownership Coverage: Proportion of assets with assigned stewards.
    3. Checklist Review Rate: Percentage of changes reviewed against policy checklists.
    4. Governance Incident Rate: Number and severity of reported breaches or risks.
    5. Audit Readiness: Share of assets with up-to-date documentation.
    6. Explainability/Fairness: Coverage of ethical assessments for high-impact AI models.
    7. Value Realization: Governance initiatives delivering measurable business outcomes.
    8. Time-to-Incident Resolution: Median time from issue detection to resolution.

    These KPIs are outcome-oriented, enabling organizations to assess governance effectiveness without excessive reporting.
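
    As an example of how lightweight this measurement can be, the following sketch computes two of the KPIs above (ownership coverage and checklist review rate) from simple asset and change logs; the data shapes and sample values are illustrative assumptions.

```python
# Each asset record notes whether a steward is assigned;
# each change record notes whether it was reviewed against a checklist.
assets = [
    {"name": "Customer CRM", "steward": "A. Singh"},
    {"name": "Churn model", "steward": None},
    {"name": "HR records", "steward": "M. Lee"},
]
changes = [
    {"asset": "Customer CRM", "checklist_reviewed": True},
    {"asset": "Churn model", "checklist_reviewed": False},
]

def percentage(part: int, whole: int) -> float:
    return round(100 * part / whole, 1) if whole else 0.0

ownership_coverage = percentage(sum(1 for a in assets if a["steward"]), len(assets))
checklist_review_rate = percentage(
    sum(1 for c in changes if c["checklist_reviewed"]), len(changes)
)

print(f"Ownership coverage: {ownership_coverage}%")        # 66.7%
print(f"Checklist review rate: {checklist_review_rate}%")  # 50.0%
```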

    Scaling MVG: Maturity-Based Targets

    MVG is designed to grow with the organization. KPI targets are calibrated across three maturity levels:

    • Initial/Baseline: Focus on establishing accountability (e.g., 60–70% training completion).
    • Developing: Push for broader coverage and process closure (e.g., 80–90% asset stewardship).
    • Advanced/Optimized: Achieve full coverage and emphasize business value (e.g., 100% checklist reviews, <2-week incident resolution).

    Thresholds use red/yellow/green indicators to signal performance, making governance health easy to monitor and communicate.
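
    One way to express those thresholds is a small lookup that maps a KPI value to a red/yellow/green status for a given maturity level; the targets shown are illustrative assumptions, not fixed MVG values.

```python
# Illustrative thresholds for a single KPI (e.g., training completion %),
# calibrated per maturity level. Real targets should reflect the organization's baseline.
TARGETS = {
    "initial":    {"green": 70, "yellow": 60},
    "developing": {"green": 85, "yellow": 75},
    "advanced":   {"green": 95, "yellow": 90},
}

def rag_status(kpi_value: float, maturity: str) -> str:
    """Map a KPI value to red/yellow/green for the given maturity level."""
    t = TARGETS[maturity]
    if kpi_value >= t["green"]:
        return "green"
    if kpi_value >= t["yellow"]:
        return "yellow"
    return "red"

print(rag_status(68, "initial"))     # yellow
print(rag_status(68, "developing"))  # red
```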

    Why MVG Works

    MVG succeeds because it aligns governance with organizational agility. It avoids the trap of over-engineering, instead focusing on:

    • Speed: Rapid deployment with minimal setup.
    • Clarity: Defined roles and responsibilities.
    • Flexibility: Scalable across teams and maturity levels.
    • Value: Direct linkage to business outcomes.

    For startups, MVG offers a governance “starter kit.” For enterprises, it provides a way to streamline and unify fragmented oversight efforts.

    Putting MVG into Practice

    To implement MVG, organizations should:

    1. Form the Governance Committee: Identify representatives and schedule quarterly meetings.
    2. Assign Stewards: Map key assets and assign owners.
    3. Create the Risk Registry: Use a shared document to track risks and controls.
    4. Develop Checklists: Draft one-pagers for each domain.
    5. Launch Training: Schedule annual sessions and track completion.
    6. Monitor KPIs: Set initial targets and review quarterly.

    Tools like spreadsheets, shared drives, and basic forms are sufficient. The emphasis is on function over form.

    Conclusion: Governance for the Real World

    In an era of accelerating innovation and increasing scrutiny, governance must evolve. The MVG framework offers a pragmatic path forward—one that respects the complexity of modern organizations while embracing the simplicity needed for action. By integrating the essentials of information, data, and AI governance into a cohesive, minimum viable model, MVG empowers organizations to govern smarter, not harder.

    Like this? Show your support at buymeacoffee.com/RobGerbrandt