A lot of companies think they’re “safe” from AI compliance risks simply because they haven’t formally adopted AI. That’s a dangerous assumption, and it’s already backfiring for some organizations.

Here’s what’s really happening: employees are quietly using ChatGPT, Claude, Gemini, and other tools to summarize customer data, rewrite client emails, or draft policy documents. In some cases, they’re even uploading sensitive files or legal content to get a “better” response. The organization may have no visibility into any of it. This is what’s called Shadow AI: unauthorized or unsanctioned use of AI tools by employees.

Now, here’s what a #GRC professional needs to do about it:

1. Start with Discovery: Use internal surveys, browser activity logs (if available), or device-level monitoring to identify which teams are already using AI tools and for what purposes. No blame, just visibility.

2. Risk Categorization: Document the type of data being processed and match it to its sensitivity. Are they uploading PII? Legal content? Proprietary product info? If so, flag it.

3. Policy Design or Update: Draft an internal AI Use Policy. It doesn’t need to ban tools outright, but it should define:
• What tools are approved
• What types of data are prohibited
• What employees need to do to request new tools

4. Communicate and Train: Employees need to understand not just what they can’t do, but why. Use plain examples to show how uploading files to a public AI model could violate privacy law, leak IP, or introduce bias into decisions.

5. Monitor and Adjust: Once you’ve rolled out your first version of the policy, revisit it every 60–90 days. This field is moving fast, and so should your governance.

This can happen anywhere: in education, real estate, logistics, fintech, or nonprofits. You don’t need a team of AI engineers to start building good governance. You just need visibility, structure, and accountability.
Let’s stop thinking of AI risk as something “only tech companies” deal with. Shadow AI is already in your workplace—you just haven’t looked yet.
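The discovery step can start smaller than you might think. As a minimal sketch, assuming you can export proxy or DNS logs as simple "user domain path" records (real log formats vary), a quick pass like this surfaces who is hitting known AI tool domains; the domain list is illustrative and should be extended for your environment:

```python
from collections import Counter

# Illustrative watchlist of AI tool domains; extend to match the
# tools actually in use at your organization.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def discover_shadow_ai(log_lines):
    """Count requests to known AI tool domains per user.

    Each log line is assumed to look like: "<user> <domain> <path>".
    Returns {user: Counter({domain: hits})}.
    """
    usage = {}
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        user, domain = parts[0], parts[1]
        if domain in AI_DOMAINS:
            usage.setdefault(user, Counter())[domain] += 1
    return usage

logs = [
    "alice chat.openai.com /c/summary",
    "bob internal.example.com /wiki",
    "alice claude.ai /chat",
]
print(discover_shadow_ai(logs))
```

The point is visibility, not surveillance: aggregate counts per team are usually enough to start the no-blame conversation the discovery step calls for.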
How to Prepare for AI Governance Changes
Explore top LinkedIn content from expert professionals.
Summary
With AI technologies rapidly evolving, organizations and governments must adapt to ongoing shifts in AI governance—rules and strategies guiding the responsible development and use of AI. Preparing for these changes involves understanding both compliance needs and the ethical, operational, and data management challenges associated with artificial intelligence.
- Audit current AI practices: Identify how and where AI tools are being used within your organization, assess risks such as data sensitivity and compliance gaps, and ensure all AI applications have clear accountability and documentation.
- Create clear governance frameworks: Develop organizational policies outlining approved AI tools, data usage limits, and procedures for requesting or evaluating new technologies. Make policies adaptable to keep up with evolving regulations.
- Educate and involve your team: Provide tailored AI literacy training for employees and establish cross-functional teams to oversee AI governance. Encourage transparency in AI usage and emphasize shared responsibility across all departments.
-
"The rapid evolution and swift adoption of generative AI have prompted governments to keep pace and prepare for future developments and impacts. Policy-makers are considering how generative artificial intelligence (AI) can be used in the public interest, balancing economic and social opportunities while mitigating risks. To achieve this purpose, this paper provides a comprehensive 360° governance framework:

1. Harness past: Use existing regulations and address gaps introduced by generative AI. The effectiveness of national strategies for promoting AI innovation and responsible practices depends on the timely assessment of the regulatory levers at hand to tackle the unique challenges and opportunities presented by the technology. Prior to developing new AI regulations or authorities, governments should:
– Assess existing regulations for tensions and gaps caused by generative AI, coordinating across the policy objectives of multiple regulatory instruments
– Clarify responsibility allocation through legal and regulatory precedents and supplement efforts where gaps are found
– Evaluate existing regulatory authorities for capacity to tackle generative AI challenges and consider the trade-offs of centralizing authority within a dedicated agency

2. Build present: Cultivate whole-of-society generative AI governance and cross-sector knowledge sharing. Government policy-makers and regulators cannot independently ensure the resilient governance of generative AI; additional stakeholder groups from across industry, civil society and academia are also needed. Governments must use a broader set of governance tools, beyond regulations, to:
– Address challenges unique to each stakeholder group in contributing to whole-of-society generative AI governance
– Cultivate multistakeholder knowledge-sharing and encourage interdisciplinary thinking
– Lead by example by adopting responsible AI practices

3. Plan future: Incorporate preparedness and agility into generative AI governance and cultivate international cooperation. Generative AI’s capabilities are evolving alongside other technologies. Governments need to develop national strategies that consider limited resources and global uncertainties, and that feature foresight mechanisms to adapt policies and regulations to technological advancements and emerging risks. This necessitates the following key actions:
– Targeted investments for AI upskilling and recruitment in government
– Horizon scanning of generative AI innovation and foreseeable risks associated with emerging capabilities, convergence with other technologies and interactions with humans
– Foresight exercises to prepare for multiple possible futures
– Impact assessment and agile regulations to prepare for the downstream effects of existing regulation and for future AI developments
– International cooperation to align standards and risk taxonomies and facilitate the sharing of knowledge and infrastructure"
-
Check out this massive global research study into the use of generative AI involving over 48,000 people in 47 countries. Excellent work by KPMG and the University of Melbourne! Key findings:

𝗖𝘂𝗿𝗿𝗲𝗻𝘁 𝗚𝗲𝗻 𝗔𝗜 𝗔𝗱𝗼𝗽𝘁𝗶𝗼𝗻
- 58% of employees intentionally use AI regularly at work (31% weekly/daily)
- General-purpose generative AI tools are most common (73% of AI users)
- 70% use free public AI tools vs. 42% using employer-provided options
- Only 41% of organizations have any policy on generative AI use

𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗥𝗶𝘀𝗸 𝗟𝗮𝗻𝗱𝘀𝗰𝗮𝗽𝗲
- 50% of employees admit uploading sensitive company data to public AI
- 57% avoid revealing when they use AI or present AI content as their own
- 66% rely on AI outputs without critical evaluation
- 56% report making mistakes due to AI use

𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝘃𝘀. 𝗖𝗼𝗻𝗰𝗲𝗿𝗻𝘀
- Most report performance benefits: efficiency, quality, innovation
- But AI creates mixed impacts on workload, stress, and human collaboration
- Half use AI instead of collaborating with colleagues
- 40% sometimes feel they cannot complete work without AI help

𝗧𝗵𝗲 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗚𝗮𝗽
- Only half of organizations offer AI training or responsible use policies
- 55% feel adequate safeguards exist for responsible AI use
- AI literacy is the strongest predictor of both use and critical engagement

𝗚𝗹𝗼𝗯𝗮𝗹 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀
- Countries like India, China, and Nigeria lead global AI adoption
- Emerging economies report higher rates of AI literacy (64% vs. 46%)

𝗖𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝗟𝗲𝗮𝗱𝗲𝗿𝘀
- Do you have clear policies on appropriate generative AI use?
- How are you supporting transparent disclosure of AI use?
- What safeguards exist to prevent sensitive data leakage to public AI tools?
- Are you providing adequate training on responsible AI use?
- How do you balance AI efficiency with maintaining human collaboration?

𝗔𝗰𝘁𝗶𝗼𝗻 𝗜𝘁𝗲𝗺𝘀 𝗳𝗼𝗿 𝗢𝗿𝗴𝗮𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻𝘀
- Develop clear generative AI policies and governance frameworks
- Invest in AI literacy training focusing on responsible use
- Create psychological safety for transparent AI use disclosure
- Implement monitoring systems for sensitive data protection
- Proactively design workflows that preserve human connection and collaboration

𝗔𝗰𝘁𝗶𝗼𝗻 𝗜𝘁𝗲𝗺𝘀 𝗳𝗼𝗿 𝗜𝗻𝗱𝗶𝘃𝗶𝗱𝘂𝗮𝗹𝘀
- Critically evaluate all AI outputs before using them
- Be transparent about your AI tool usage
- Learn your organization's AI policies and follow them (if they exist!)
- Balance AI efficiency with maintaining your unique human skills

You can find the full report here: https://lnkd.in/emvjQnxa

All of this is a heavy focus for me within Advisory (AI literacy/fluency, AI policies, responsible & effective use, etc.). Let me know if you'd like to connect and discuss. 🙏

#GenerativeAI #WorkplaceTrends #AIGovernance #DigitalTransformation
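One of the organizational action items, monitoring to prevent sensitive data leakage into public AI tools, can be prototyped with simple pattern matching. This is an illustrative sketch only, not a substitute for a real DLP product; the category names and regex patterns are my own assumptions, and production detection would need far more robust techniques (checksums, entity recognition, context rules):

```python
import re

# Illustrative PII patterns; real DLP tooling is considerably stricter.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_for_pii(text):
    """Return the PII categories found in text, if any."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

msg = "Please summarise: contact jane.doe@example.com, SSN 123-45-6789"
print(screen_for_pii(msg))  # ['email', 'ssn']
```

A check like this could sit in a browser extension, proxy, or pre-submit hook, warning the employee before a prompt ever leaves the company network.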
-
🎯 The CIO's Organizational Playbook for the AI Era... I recently spoke with a CIO friend about how IT teams are changing. Our discussion made me think about what sets apart IT teams that succeed with AI from those that don’t. I looked over my research and reviewed my interviews with other leaders. This information is too valuable not to share: ✓ Build AI-Ready Capabilities 🟢 Establish continuous learning programs focused on practical AI applications 🟢 Implement cross-functional training to bridge technical/business gaps 🟢 Prioritize hands-on AI workshops over theoretical certifications ✓ Master AI Risk Management 🟢 Develop processes to identify and mitigate technical failures early 🟢 Create a strategic AI roadmap with clear risk contingency protocols 🟢 Align all AI initiatives with broader business objectives ✓ Drive Stakeholder Engagement 🟢 Build a cross-functional AI coalition (executives, HR, business units) 🟢 Communicate AI initiatives with transparency to reduce resistance 🟢 Document tangible benefits to secure continued buy-in ✓ Implement with Agility 🟢 Replace waterfall approaches with iterative AI development 🟢 Focus on quick prototyping and real-world testing 🟢 Ensure infrastructure scalability supports AI growth ✓ Lead with AI Ethics 🟢 Train teams on bias identification and mitigation techniques 🟢 Establish clear governance frameworks with accountability 🟢 Make responsible AI deployment non-negotiable ✓ Transform Your Talent Strategy 🟢 Enhance IT roles to integrate AI responsibilities 🟢 Create peer mentoring programs pairing AI experts with domain specialists 🟢 Cultivate an AI-positive culture through early wins ✓ Measure What Matters 🟢 Set specific AI KPIs that link directly to business outcomes 🟢 Implement continuous feedback loops for ongoing refinement 🟢 Track both technical metrics and organizational adoption rates The organizations mastering these elements aren't just surviving the AI transition—they're thriving because of it. 
#digitaltransformation #changemanagement #leadership #CIO
-
#AI literacy has evolved from luxury to necessity. Under the EU AI Act, the Article 4 requirements apply as of February 2, 2025. What does that mean? Companies must “take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff” and those acting on their behalf. While there’s little detail on the specifics, the intent is clear: enable those who develop, deploy, and use AI to better understand the technology and, in turn, make more informed decisions to maximize its potential benefits and minimize its potential risks.

Here are some framing principles:
▶ Go beyond the basics. A baseline is necessary, but only a starting point.
▶ Appreciate that literacy is multi-dimensional. It should span the swirling mix of technical, business, practical, and ethical implications of AI.
▶ Appreciate that it’s also contextual. There is no one-size-fits-all approach. Instead, literacy should be tailored to different roles to account for different responsibilities, and be cross-functional to reflect the real-world collaboration that #AIgovernance demands.
▶ Prepare for a never-ending journey. The field of AI is dynamic, and continuous learning is critical to stay up-to-date on developments, trends, industry standards, and best practices.

Here are some steps to take:
✅ Assess current literacy levels.
✅ Emphasize inclusivity (e.g., because not everyone will be starting at the same place).
✅ Take a holistic, programmatic approach, with foundational content supplemented by tailored learning paths.
✅ Identify champions to embrace the initiative and welcome volunteers who want to contribute to the cause.
✅ Create ongoing education opportunities (e.g., through awareness campaigns, reminders, and refreshers).
✅ Create and share resources to supplement training (e.g., newsletters, blogs, and guides).
✅ Consider third-party resources to augment capabilities and broaden horizons (e.g., those from the IAPP for the #AIGP, or ones I shared here https://lnkd.in/eirmKxD8).
✅ Regularly monitor progress and assess effectiveness.
✅ Document everything for auditability and accountability.

Ultimately, embedding AI literacy within your company isn't just a check-box for compliance. It’s how you build a modern workforce to drive responsible innovation and unlock sustainable growth.
-
This new white paper "Steps Toward AI Governance" summarizes insights from the 2024 EqualAI Summit, cosponsored by RAND in D.C. in July 2024, where senior executives discussed AI development and deployment, challenges in AI governance, and solutions for these issues across government and industry sectors. Link: https://lnkd.in/giDiaCA3

* * *

The white paper outlines several technical and organizational challenges that impact effective AI governance:

Technical Challenges:
1) Evaluation of External Models: Difficulties arise in assessing externally sourced AI models due to unclear testing standards and development transparency, in contrast to in-house models, which can be customized and fine-tuned to fit specific organizational needs.
2) High-Risk Use Cases: Prioritizing the evaluation of AI use cases with high risks is challenging due to the diverse and unpredictable outputs of AI, particularly generative AI. Traditional evaluation metrics may not capture all vulnerabilities, suggesting a need for flexible frameworks like red teaming.

Organizational Challenges:
1) Misaligned Incentives: Organizational goals often conflict with the resource-intensive demands of implementing effective AI governance, particularly when not legally required. Lack of incentives for employees to raise concerns and the absence of whistleblower protections can lead to risks being overlooked.
2) Company Culture and Leadership: Establishing a culture that values AI governance is crucial but challenging. Effective governance requires authority and buy-in from leadership, including the board and C-suite executives.
3) Employee Buy-In: Employee resistance, driven by job security concerns, complicates AI adoption, highlighting the need for targeted training.
4) Vendor Relations: Effective AI governance is also impacted by gaps in technical knowledge between companies and vendors, leading to challenges in ensuring appropriate AI model evaluation and transparency.

* * *

Recommendations for Companies:
1) Catalog AI Use Cases: Maintain a centralized catalog of AI tools and applications, updated regularly to track usage and document specifications for risk assessment.
2) Standardize Vendor Questions: Develop a standardized questionnaire for vendors to ensure evaluations are based on consistent metrics, promoting better integration and governance in vendor relationships.
3) Create an AI Information Tool: Implement a chatbot or similar tool to provide clear, accessible answers to AI governance questions for employees, using diverse informational sources.
4) Foster Multistakeholder Engagement: Engage both internal stakeholders, such as C-suite executives, and external groups, including end users and marginalized communities.
5) Leverage Existing Processes: Utilize established organizational processes, such as crisis management and technical risk management, to integrate AI governance more efficiently into current frameworks.
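The first recommendation, a centralized catalog of AI use cases, can begin as a simple structured registry long before you buy tooling for it. A minimal sketch, where the field names and example entries are my own assumptions rather than anything from the white paper:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AIUseCase:
    name: str
    owner: str                       # accountable individual or team
    vendor: str                      # "in-house" or an external supplier
    data_categories: List[str] = field(default_factory=list)
    risk_tier: str = "unassessed"    # e.g. low / medium / high
    last_reviewed: Optional[date] = None

# Hypothetical catalog entries for illustration.
catalog = [
    AIUseCase("support-email-drafting", "CX team", "OpenAI",
              ["customer PII"], "high", date(2024, 9, 1)),
    AIUseCase("log-anomaly-detection", "SRE team", "in-house",
              ["system telemetry"], "low"),
]

# A simple governance query: which high-risk use cases touch PII?
flagged = [u.name for u in catalog
           if u.risk_tier == "high"
           and any("PII" in c for c in u.data_categories)]
print(flagged)
```

Even a registry this small answers the questions risk assessors ask first: who owns it, what data it touches, and when it was last reviewed.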
-
President Biden’s recent Executive Order on AI leaves one key issue open that remains top of mind for most organizations today: data privacy. The order calls on Congress to pass “bipartisan data privacy legislation” to protect Americans’ data.

As we embrace the power of AI, we must also recognize the morphing challenges of data privacy in the context of data sovereignty. The rules are constantly changing, and organizations need flexibility to maintain compliance not just in their home countries but also in every country in which they operate. Governments worldwide, from the European Union with its GDPR to India's Personal Data Protection Bill, are setting stringent regulations to protect their citizens' data. The essence? Data about a nation's citizens or businesses should only reside on systems within their legal and regulatory purview.

We all know AI is a game-changer, but it is also a voracious consumer of data and a complicating factor for data sovereignty. This is especially true of generative AI, which consumes data indiscriminately, often stored and processed at the AI companies' discretion. This collision between AI's insatiable appetite for data, the temptation for organizations to use it, and global data sovereignty regulations presents a unique challenge for businesses. With the right approach, businesses can harness the power of AI while respecting data sovereignty. Here are a few ideas on how:

Mindset: Make data sovereignty a company-wide priority. It's not just an IT or legal concern; it's a business imperative. Every team member should understand the risks associated with non-compliance.

Inventory: Know your data. With large enterprises storing data in over 800 applications on average, it's crucial to maintain an inventory of your company's data and be aware of the vendors interacting with it.

Governance: Stay updated with regional data laws and ensure compliance. Data sovereignty requires governance to be local as well.

Vendor Compliance: Your external vendors should be in lockstep with your data policies.

Leverage Data Unification Solutions: Use flexible, scalable tools to ensure data sovereignty compliance. Data unification and management tools powered by AI can detect data leakages, trace data lineage, and ensure data remains within stipulated borders.

I’ve witnessed how this can be accomplished in many industries, including healthcare. Despite stringent privacy and sovereignty policies, many healthcare management systems demonstrate that robust data management, compliant with regulations, is achievable. The key is designing systems with data management policies from the outset.

To all global organizations: embrace the future, but do it responsibly. Data privacy and sovereignty are not hurdles; they are a responsibility we must uphold for the trust of our customers and the integrity of our businesses. Planning for inevitable changes now will pay dividends in the future.

#data
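The inventory and governance ideas above can be combined into a basic residency check: given an inventory of datasets and a rule table of where each jurisdiction's data may live, flag anything stored outside its permitted regions. The regions, rule table, and dataset records below are all hypothetical placeholders:

```python
# Hypothetical residency rules: which storage regions are acceptable
# for data about subjects in each jurisdiction.
RESIDENCY_RULES = {
    "EU": {"eu-west-1", "eu-central-1"},
    "IN": {"ap-south-1"},
}

# Hypothetical data inventory records.
datasets = [
    {"name": "crm-customers-eu", "subject_region": "EU", "stored_in": "us-east-1"},
    {"name": "billing-in", "subject_region": "IN", "stored_in": "ap-south-1"},
]

def residency_violations(items):
    """Return names of datasets stored outside their permitted regions."""
    return [d["name"] for d in items
            if d["stored_in"] not in RESIDENCY_RULES.get(d["subject_region"], set())]

print(residency_violations(datasets))  # ['crm-customers-eu']
```

Real sovereignty compliance involves far more nuance (transfer mechanisms, sub-processors, encryption), but an automated check like this catches the obvious drift between policy and reality.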
-
✳ Bridging Ethics and Operations in AI Systems ✳

Governance for AI systems needs to balance operational goals with ethical considerations. #ISO5339 and #ISO24368 provide practical tools for embedding ethics into the development and management of AI systems.

➡ Connecting ISO5339 to Ethical Operations

ISO5339 offers detailed guidance for integrating ethical principles into AI workflows. It focuses on creating systems that are responsive to the people and communities they affect.

1. Engaging Stakeholders: Stakeholders impacted by AI systems often bring perspectives that developers may overlook. ISO5339 emphasizes working with users, affected communities, and industry partners to uncover potential risks and ensure systems are designed with real-world impact in mind.

2. Ensuring Transparency: AI systems must be explainable to maintain trust. ISO5339 recommends designing systems that can communicate how decisions are made in a way that non-technical users can understand. This is especially critical in areas where decisions directly affect lives, such as healthcare or hiring.

3. Evaluating Bias: Bias in AI systems often arises from incomplete data or unintended algorithmic behaviors. ISO5339 supports ongoing evaluations to identify and address these issues during development and deployment, reducing the likelihood of harm.

➡ Expanding on Ethics with ISO24368

ISO24368 provides a broader view of the societal and ethical challenges of AI, offering additional guidance for long-term accountability and fairness.

✅ Fairness: AI systems can unintentionally reinforce existing inequalities. ISO24368 emphasizes assessing decisions to prevent discriminatory impacts and to align outcomes with social expectations.
✅ Transparency: Systems that operate without clarity risk losing user trust. ISO24368 highlights the importance of creating processes where decision-making paths are fully traceable and understandable.
✅ Human Accountability: Decisions made by AI should remain subject to human review. ISO24368 stresses the need for mechanisms that allow organizations to take responsibility for outcomes and override decisions when necessary.

➡ Applying These Standards in Practice

Ethical considerations cannot be separated from operational processes. ISO24368 encourages organizations to incorporate ethical reviews and risk assessments at each stage of the AI lifecycle. ISO5339 focuses on embedding these principles during system design, ensuring that ethics is part of both the foundation and the long-term management of AI systems.

➡ Lessons from #EthicalMachines

In "Ethical Machines", Reid Blackman, Ph.D. highlights the importance of making ethics practical. He argues for actionable frameworks that ensure AI systems are designed to meet societal expectations and business goals. Blackman’s focus on stakeholder input, decision transparency, and accountability closely aligns with the goals of ISO5339 and ISO24368, providing a clear way forward for organizations.
-
Your AI pipeline is only as strong as the paper trail behind it.

Picture this: a critical model makes a bad call, regulators ask for the “why,” and your team has nothing but Slack threads and half-finished docs. That is the accountability gap the Alan Turing Institute’s new workbook targets.

Why it grabbed my attention:
• Answerability means every design choice links to a name, a date, and a reason. No finger pointing later
• Auditability demands a living log from data pull to decommission that a non-technical reviewer can follow in plain language
• Anticipatory action beats damage control. Governance happens during sprint planning, not after the press release

How to put this into play:
1. Spin up a Process Based Governance log on day one. Treat it like version-controlled code
2. Map roles to each governance step, then test the chain. Can you trace a model output back to the feature engineer who added the variable?
3. Schedule quarterly “red team audits” where someone outside the build squad tries to break the traceability. Gaps become backlog items

The payoff: clear accountability strengthens stakeholder trust, slashes regulatory risk, and frees engineers to focus on better models rather than post hoc excuses.

If your AI program cannot answer, “Who owns this decision and how did we get here?” you are not governing. You are winging it. Time to upgrade. When the next model misfires, will your team have an audit trail or an alibi?
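A Process Based Governance log treated like version-controlled code might start as an append-only JSON Lines file, with every entry carrying the name, date, and reason that answerability demands. The schema below is my own assumption, not the Turing Institute workbook's format:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def log_decision(path, *, stage, decision, owner, reason):
    """Append one accountable design decision to the governance log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "stage": stage,        # e.g. data-pull, feature-eng, deployment
        "decision": decision,
        "owner": owner,        # the accountable name
        "reason": reason,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")   # append-only JSON Lines
    return entry

def trace(path, stage):
    """Answer 'who owns this decision?' for a given lifecycle stage."""
    with open(path) as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if e["stage"] == stage]

# Demo with hypothetical entries.
path = os.path.join(tempfile.mkdtemp(), "governance.jsonl")
log_decision(path, stage="feature-eng", decision="added tenure variable",
             owner="J. Smith", reason="improves recall on churn cases")
log_decision(path, stage="deployment", decision="canary rollout first",
             owner="A. Lee", reason="limit blast radius")
print(trace(path, "feature-eng")[0]["owner"])  # J. Smith
```

Because it is plain text, the log diffs cleanly in git, which is exactly what a quarterly red-team audit needs to try to break the traceability chain.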
-
Here are five things you as a CX leader can put into practice this year to set up your org for success around AI and governance:

Conduct a Comprehensive AI Audit: Start by assessing your current AI systems for regulatory compliance and risk. Map out data flows, algorithms, and decision-making processes to identify vulnerabilities.

Establish a CX-AI Governance Committee: Create a cross-functional team to oversee AI initiatives and ensure alignment with emerging regulations that are focused on the customer. This should include stakeholders from legal, IT, Risk, etc.

Develop Transparent Data and Model Governance Policies: Implement strict data management practices that document data sources, usage, and model decisions. Transparent policies build customer trust and facilitate audits.

Invest in Ongoing Staff Training: Ensure that your CX team is well-versed in AI regulations, ethical considerations, and compliance standards. This one is sometimes overlooked, but critical.

Engage with Industry Stakeholders: Participate in industry forums, regulatory consultations, and standard-setting organizations. This engagement will not only keep you updated on the latest regulatory developments but also provide opportunities to influence the future of AI governance. (I follow updates from entities like the OECD AI Policy Observatory as an example.)

#ai #customerexperience #regulation