Best Practices for Data Governance and Role Alignment
Explore top LinkedIn content from expert professionals.
Summary
Data governance and role alignment are critical for ensuring that organizations manage data responsibly and securely. This involves creating clear rules for data usage, defining roles for accountability, and establishing processes that minimize risks while maximizing the value of data and AI innovations.
- Define clear roles and responsibilities: Assign specific data management tasks and accountability to individuals or teams, ensuring everyone understands their role in maintaining data accuracy and security.
- Implement standardized processes: Develop and document procedures for data management, AI usage, and compliance to ensure consistency, transparency, and ethical practices across the organization.
- Foster collaboration between teams: Encourage communication and coordination between data science, IT, and governance teams to bridge gaps in processes and align on organizational goals.
-
#GRC Today I led a session focused on rolling out a new Standard Operating Procedure (SOP) for the use of artificial intelligence tools, including generative AI, within our organization.
AI tools offer powerful benefits (faster analysis, automation, improved communication), but without guidance they can introduce major risks:
• Data leakage
• IP exposure
• Regulatory violations
• Inconsistent use across teams
That’s why a well-crafted SOP isn’t just nice to have... it’s a requirement for responsible AI governance.
1. I walked the team through the objective: to outline clear expectations and minimum requirements for engaging with AI tools in a way that protects company data, respects ethical standards, and aligns with core values. We highlighted the dual nature of AI (high value, high risk) and positioned the SOP as a safeguard, not a blocker.
2. Next, I made sure everyone understood who this applies to:
• All employees
• Contractors
• Anyone using or integrating AI into business operations
We talked through scenarios like writing reports, drafting code, automating tasks, or summarizing client info using AI.
3. We broke down risk into:
• Operational risk: using AI tools that aren’t vendor-reviewed
• Compliance risk: feeding regulated or confidential data into public tools
• Reputational risk: inaccurate or biased outputs tied to brand use
• Legal risk: violation of third-party data handling agreements
4. We outlined what “responsible use” looks like (a rough sketch of enforcing these rules follows this post):
• No uploading of confidential data into public-facing AI tools
• Clear tagging of AI-generated content in internal deliverables
• Vendor-approved tools only
• Security reviews for integrations
• Mandatory acknowledgment of the SOP
5. I closed the session with action items:
• Review and digitally sign the SOP
• Identify all current AI use cases on your team
• Flag any tools or workflows that may require deeper evaluation
Don’t assume everyone understands the risk just because they use the tools. Frame your SOP rollout as an enablement strategy, not a restriction. Show them how strong governance creates freedom to innovate... safely.
Want a copy of the AI Tool Risk Matrix or the Responsible Use Checklist? Drop a comment below.
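To make rules like "vendor-approved tools only" and "no confidential data in public-facing tools" operational, a lightweight pre-flight check can screen requests before they reach an external model. Below is a minimal sketch in Python; the tool names and keyword patterns are invented for illustration, and a real control would rely on a vendor-maintained allow-list and proper DLP classification rather than regexes.

```python
# Hypothetical pre-flight check for AI tool requests. Tool names and
# patterns are invented for illustration; a real control would use a
# vendor-maintained allow-list and proper DLP classification.
import re

APPROVED_TOOLS = {"internal-llm", "vendor-reviewed-copilot"}  # assumed allow-list

# Naive markers suggesting confidential or regulated content.
CONFIDENTIAL_PATTERNS = [
    r"\bconfidential\b",
    r"\binternal only\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-like pattern
]

def check_ai_request(tool: str, prompt: str) -> list:
    """Return the SOP violations found in a proposed AI tool request."""
    violations = []
    if tool not in APPROVED_TOOLS:
        violations.append(f"'{tool}' is not a vendor-approved tool")
    for pattern in CONFIDENTIAL_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            violations.append(f"prompt matches confidential marker: {pattern}")
    return violations

if __name__ == "__main__":
    for issue in check_ai_request("public-chatbot", "Summarize this CONFIDENTIAL client report"):
        print("BLOCKED:", issue)
```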
-
How will CDOs bridge the divide between data science teams and more 'traditional' data management approaches? In a post two days ago I highlighted a significant problem for companies that have a data science function: there are often two distinct processes for ensuring data is accurate, consistent, and trustworthy - one in the data science world, one in traditional data management - and insufficient mechanisms exist to keep the two approaches consistent. This divide creates significant risk for the CDO organization, particularly in the area of AI data governance. To bridge this divide, I suggest CDOs focus on:
1️⃣ Organizational alignment
There must be org alignment between DS functions and more traditional data management and governance functions, at every level of the org where these functions reside. For bigger companies, creating some form of a DS/DM 'center of excellence' to ensure this alignment is worth exploring.
2️⃣ Collaboration is key
CDOs must create pathways for all data professionals to interact more and share insights on existing processes, systems, and datasets. I suspect many data scientists are completely unaware of the work done in other areas of the data and analytics function.
3️⃣ Incentives for re-use
CDOs must consider implementing incentives for data scientists to re-use data that fits their requirements. The existing pattern of building customized solutions for each project must be challenged.
4️⃣ Common data platform
CDOs should re-evaluate their technical strategies to ensure they are moving toward a common platform that can support all of data science, BI/analytics, and operational workloads.
5️⃣ Data catalogs
As part of their technology strategy, CDOs should consider solutions that allow all data to be cataloged, with full visibility into data lifecycle/lineage and any existing governance policies affecting that data.
6️⃣ Full visibility on all data transformations
Data teams must start to capture ALL business rules, applied across any data management or governance tool, that in any way transform source/raw data. For example, the match rules used by an MDM system to enable entity resolution in the creation of a master record must be fully documented and available for data scientists to interrogate. (A sketch of what such a record could look like follows this post.)
7️⃣ Data product management
The best way to ensure CDOs are building data products that deliver customer value, while keeping the processes used to build those products scalable, cost-effective, and efficient, is to hire a product manager. Another way to think of this role is as the CDO org's 'transformation' officer.
Focusing on these 7 things would be a great start to breaking down these silos. As always, your suggestions on how to solve this problem are welcome - so please comment below! #datascience #cdo #datagovernance
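Point 6️⃣ calls for every transformation rule to be documented and interrogable. Here is a minimal sketch of what that could look like as structured, queryable metadata; the field names are invented for illustration, and a production data catalog would track far more (owners, versions, full lineage graphs).

```python
# Sketch of cataloging transformation rules as queryable metadata.
# Field names are invented for illustration; a production catalog
# would also track owners, versions, and full lineage graphs.
from dataclasses import dataclass

@dataclass
class TransformationRule:
    rule_id: str
    source_fields: list   # raw/source fields the rule reads
    target_field: str     # field the rule produces
    logic: str            # human-readable statement of the business rule
    tool: str             # system that applies it (e.g., an MDM platform)

CATALOG = []

def register(rule):
    """Record a rule so downstream consumers can see how data was shaped."""
    CATALOG.append(rule)

def rules_affecting(field_name):
    """Answer a data scientist's question: which rules touched this field?"""
    return [r for r in CATALOG
            if r.target_field == field_name or field_name in r.source_fields]

register(TransformationRule(
    rule_id="MDM-MATCH-042",
    source_fields=["crm.email", "billing.email"],
    target_field="master.customer_id",
    logic="Fuzzy-match emails (similarity >= 0.92) to merge customer records",
    tool="MDM",
))

print(rules_affecting("master.customer_id"))
```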
-
𝗗𝗮𝘁𝗮 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 + 𝗔𝗜: 𝟰 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗘𝘃𝗲𝗿𝘆 𝗖𝗗𝗢 𝗠𝘂𝘀𝘁 𝗔𝘀𝗸 𝘁𝗼 𝗦𝘁𝗮𝘆 𝗼𝗻 𝘁𝗵𝗲 𝗥𝗶𝗴𝗵𝘁 𝗦𝗶𝗱𝗲.
There are two schools of thought: don't use AI until we understand all the ethical implications and can control it, OR data governance is passé - AI is the magic that can fix your data and make it great.
If you’ve been reading my newsletter, you know by now that I don’t believe in magic. Nor do I believe that humans as a species are capable of standing still when there is a whole new frontier to discover, make money on, and deliver impact with.
To decide whether data governance is still relevant in the age of AI, let’s look at 4 fundamental questions about AI implementations:
𝗔𝗿𝗲 𝘁𝗵𝗲 𝗿𝗲𝘀𝘂𝗹𝘁𝘀 𝗰𝗼𝗿𝗿𝗲𝗰𝘁?
While model governance and testing play a big role in answering this question, data accuracy is a crucial pillar. Data governance enables data accuracy.
𝗖𝗮𝗻 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝗲𝘅𝗽𝗹𝗮𝗶𝗻 𝗵𝗼𝘄 𝗶𝘁 𝗰𝗮𝗺𝗲 𝘂𝗽 𝘄𝗶𝘁𝗵 𝘁𝗵𝗶𝘀 𝗮𝗻𝘀𝘄𝗲𝗿?
While model explainability is still evolving, one major component is knowing what data was used in both training and execution. Data observability - a new capability that is the evolution of both data quality and data lineage - is key to answering that question. (A toy sketch of this idea follows the post.)
𝗜𝘀 𝗶𝘁 𝘂𝗻𝗯𝗶𝗮𝘀𝗲𝗱?
Addressing and mitigating bias is both challenging and important, especially in applications of AI that affect people (e.g., hiring, loan approvals, claims adjudication). Two data governance disciplines, data observability and metadata management, are key to discovering the biases that exist in data.
𝗜𝘀 𝗶𝘁 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗶𝗯𝗹𝗲?
Among the many considerations needed to answer this question, data rights stand out as yet another sign that data governance's importance is increasing. Data rights encompass both data privacy (for example, did the customer agree to the use of their data for marketing purposes?) and data source management (for example, are we ensuring we are not using copyrighted material or irresponsible content?). Metadata management combined with data observability is key to managing both data rights and data source context.
Continually evolving our data and AI governance is how we ensure the benefits we derive from its use aren’t negated by outsized risks.
***
500+ data executives are subscribed to the 'Leading with Data' newsletter. Every Friday morning, I'll email you 1 actionable tip to accelerate the business potential of your data & make it an organisational priority. Would you like to subscribe? Click on ‘View My Blog’ right below my name at the start of this post.
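The explainability answer above hinges on knowing exactly which data a model saw at training and execution time. Here is a toy sketch of that observability idea, with all names and structures assumed for illustration: fingerprint the training set, then attach the fingerprint to every downstream event.

```python
# Toy illustration of data observability for explainability: fingerprint
# the data a model was trained on, and attach that fingerprint to every
# prediction so answers can be traced back to their inputs. All names
# and structures here are assumptions, not an established API.
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []

def fingerprint(records):
    """Stable hash identifying exactly which records were used."""
    blob = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

def log_event(stage, dataset_id, fp):
    """Record which data a model touched, and when."""
    AUDIT_LOG.append({
        "stage": stage,  # "training" or "inference"
        "dataset_id": dataset_id,
        "fingerprint": fp,
        "at": datetime.now(timezone.utc).isoformat(),
    })

training_data = [{"age": 41, "approved": True}, {"age": 29, "approved": False}]
fp = fingerprint(training_data)
log_event("training", "loans_2024_q1", fp)

# Later, each prediction carries the lineage needed to answer
# "what data produced this answer?"
log_event("inference", "loans_2024_q1", fp)
print(json.dumps(AUDIT_LOG, indent=2))
```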
-
I spoke on two panels yesterday, offering a privacy practitioner's perspective on AI governance and third-party risk management. 3 takeaways:
1️⃣ Data governance is multidisciplinary. There are lessons to be learned from all walks of life. Our panels wove together stories and takeaways from H. Bryan Cunningham (policy, strategy, podcast on obscure history), Mike Grant (cybersecurity insurance, CPA), Alyssa Coon (legal, privacy operations), Mark Kasperowicz (history, humor, and curiosity), Steve Kerler (pragmatic leadership, change management), and myself (compliance, regulatory/enforcement analysis, privacy operations). Look for the universal threads in your own experience. Chances are, there's a way for them to apply across data governance and data privacy as well.
2️⃣ Create resilience through foundations. Both panels came back to core principles. When they're in place, business leaders can make decisions with full awareness of how those decisions fit the policies.
> Know Your Data/AI/Vendors: where it goes, what you're allowed to do with it, and what you're actually doing with it.
> North Star Values: decisions should align with company values. Leadership, committees, stakeholders, and operators should all align on what this looks like in practice. This includes risk appetite.
> Risk Assessment: review the legislative, regulatory, cybersecurity, and market landscape. Assess against your data, your values, your risk appetite. What changes do you need to make to get yourself aligned?
> Iterate. (These panels were sponsored by Privageo, where the Discover-Build-Manage framework maps to these ideas: Align on priorities; Bridge the gap; review and Course-Correct or Carry On.)
3️⃣ AI isn't going anywhere. Bryan Cunningham noted that forbidding staff from using AI tools won't work. Perhaps, he suggests, you can create a sandbox environment for exploration, without risk to data. For our part, Privageo recommends structuring your guidance to employees in three buckets - but the line in the sand between the buckets will vary by organization! (A toy sketch of this triage follows the post.)
> No permission required: low-risk activities that do not involve trade secrets, company data, personal information, or other risk. E.g., asking a genAI tool to assist with drafting an email.
> Strictly forbidden: high-risk activities where company control and audit trails must be maintained. E.g., anything involving sensitive personal information or company schematics.
> "Navigating with Care": where most real-world AI applications reside, the gray area between those clear-cut options. Go back to takeaway 2, get your foundations in place, and bring together stakeholders to assess how your values, data, risk appetite, and business needs interact. It's critical to define your boundaries.
---
It was a pleasure to discuss the above with the sharp minds at ELN's Cybersecurity Retreat! Thank you to everyone on the panels for the thought-provoking discussions.
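The three-bucket guidance in takeaway 3️⃣ lends itself to a simple triage aid. Below is a hypothetical sketch: the bucket names come from the post, but the keyword triggers are invented placeholders, and each organization would draw its own lines.

```python
# Hypothetical decision aid for the three-bucket AI guidance described
# above. The buckets come from the post; the keyword triggers are
# invented placeholders, and every organization would draw its own lines.
from enum import Enum

class Bucket(Enum):
    NO_PERMISSION_REQUIRED = "no permission required"
    NAVIGATE_WITH_CARE = "navigating with care"
    STRICTLY_FORBIDDEN = "strictly forbidden"

FORBIDDEN_TRIGGERS = {"sensitive personal information", "schematics", "trade secret"}
LOW_RISK_TRIGGERS = {"draft an email", "brainstorm", "public information"}

def classify(activity: str) -> Bucket:
    """Sort a proposed AI activity into a guidance bucket."""
    text = activity.lower()
    if any(t in text for t in FORBIDDEN_TRIGGERS):
        return Bucket.STRICTLY_FORBIDDEN
    if any(t in text for t in LOW_RISK_TRIGGERS):
        return Bucket.NO_PERMISSION_REQUIRED
    # Most real-world uses land in the gray area and need stakeholder review.
    return Bucket.NAVIGATE_WITH_CARE

print(classify("Ask genAI to draft an email to the team"))
print(classify("Upload product schematics for analysis"))
print(classify("Summarize anonymized survey results"))
```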
-
We’re at a crossroads. AI is accelerating, but our ability to govern data responsibly isn’t keeping pace. The next big leap isn’t more AI - it’s TRUST, by design.
Every week, I speak with organizations eager to “lead with AI,” convinced that more features or bigger models are the solution. But here’s the inconvenient truth: without strong foundations for data governance, all the AI in the world just adds complexity, risk, confusion, and tech debt.
Real innovation doesn’t start with algorithms. It starts with clarity. It starts with accountability:
• Do you know where your data lives, at every stage of its lifecycle?
• Are roles and responsibilities clear, from leadership to frontline teams?
• Are your processes standardized, repeatable, and provable?
• When you deploy AI, can you explain its decisions to your users, your partners, and regulators?
• Are your third parties held to the same high standards as your internal teams?
• Is compliance an afterthought, or is it embedded by design?
This is the moment for Responsible Data Governance (RDG™), the standard created by XRSI to transform TRUST from a buzzword into an operational reality. RDG™ isn’t about compliance checklists or marketing theater. It’s a blueprint for leadership, resilience, and authentic accountability in a world defined by rapid change.
Here’s my challenge to every leader: before you chase the next big AI promise, ask: Are your data practices worthy of trust? Are you ready to certify it, not just say it?
If your organization:
1. Operates #XR, #spatial computing, or #digital #twins that interact with real-world user behavior;
2. Collects, generates, and/or processes personal, sensitive, or inferred data; or
3. Deploys #AI / ML algorithms in decision-making, personalization, automation, or surveillance contexts;
and you want your customers, partners, and regulators to believe in your AI (not just take your word for it), now is the time to act.
TRUST is the new competitive advantage. Let’s build it together. Message me to explore how RDG™ certification can help your organization cut through the noise and lead with confidence. Or visit www.xrsi.org/rdg to start your journey.
The future of AI belongs to those who make trust a core capability - not just a slogan.
Liam Coffey Ally Kaiser Radia Funna Asha Easton Amy Peck Alex Cahana, MD David W. Sime Paul Jones - MBA CMgr FCMI April Boyd-Noronha 🔐 SSAP, MBA 🥽 Luis Bravo Martins Monika Manolova, PhD Julia Scott Jaime Schwarz Joe Morgan, MD Divya Chander