Recent Global Developments in AI Policy

Summary

Recent global developments in AI policy refer to the ongoing evolution of policies and regulations worldwide aimed at governing artificial intelligence (AI). These developments focus on creating frameworks to ensure AI technology is safe, transparent, equitable, and beneficial for society while addressing concerns like data privacy, bias, and ethical accountability.

  • Monitor evolving regulations: Stay updated on policy changes such as the EU’s General-Purpose AI Code of Practice or the G7 Toolkit for AI to align your AI strategies with compliance requirements.
  • Prioritize ethical AI practices: Implement measures like bias testing, data governance, and transparency frameworks to mitigate risks and foster trust in AI-driven decisions (see the sketch after this list).
  • Engage in global collaboration: Participate in international initiatives and standards to help shape the future of responsible and innovative AI development.
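
To make the bias-testing point concrete, here is a minimal sketch of one common check: the demographic parity gap between groups of decision subjects. The data, group labels, and the 0.1 threshold are illustrative assumptions, not requirements drawn from any policy discussed below.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Return the largest gap in positive-outcome rates between groups,
    plus the per-group rates.

    outcomes: list of 0/1 model decisions (1 = favorable outcome)
    groups:   list of group labels, aligned with outcomes
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: decisions from a hypothetical screening model.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group_ids = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, group_ids)
print(f"positive rates by group: {rates}")
if gap > 0.1:  # threshold is an illustrative policy choice
    print(f"WARNING: demographic parity gap {gap:.2f} exceeds threshold")
```

A real deployment would compute this over production decision logs and pair it with other metrics, since no single statistic captures fairness.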
  • Peter Slattery, PhD

    MIT AI Risk Initiative | MIT FutureTech

    64,310 followers

    "Following the Seoul AI Safety Summit, we have seen the announcement of a substantial network of state-run AI Safety Institutes (AISIs) across the globe. What progress has been made? How do their plans and motivations differ? And what can we learn about how to set up AISIs effectively? This brief analyses the development, structure, and goals of the first wave of AISIs. Key findings: Diverse Approaches: Countries have adopted varied strategies in establishing their AISIs, ranging from building new institutions (UK, US) to repurposing existing ones (EU, Singapore). Funding Disparities: Significant variations in funding levels may impact the relative influence and capabilities of different AISIs. The UK leads with £100 million secured until 2030, while others like the US face funding uncertainties. International Cooperation: While AISIs aim to foster global collaboration, tensions between national interests and international cooperation remains a challenge for AI governance. Efforts like the UK-US partnership on model evaluations highlight potential for effective cross-border cooperation. Regulatory Approaches: There’s a spectrum from voluntary commitments (UK, US) to hard regulation (EU), with ongoing debates about the most effective approach for ensuring AI safety while fostering innovation. Focus Areas: Most AISIs are prioritising AI model evaluations, standard-setting, and international coordination. However, the specific risks and research areas vary among institutions. Future Uncertainties: The evolving nature of AI technology and relevant geopolitical factors create significant uncertainties for the future roles and impacts of AISIs. Adaptability will be key to their continued relevance and effectiveness." This work from The International Center for Future Generations - ICFG is quite helpful for understanding the existing institutes and their overlaps and differences. Link in comments.

  • Prukalpa ⚡

    Founder & Co-CEO at Atlan | Forbes30, Fortune40, TED Speaker

    46,727 followers

    The EU just said "no brakes" on AI regulation. Despite heavy pushback from tech giants like Apple, Meta, and Airbus, the EU pressed forward last week with its General-Purpose AI Code of Practice. Here's what's coming:

    → General-purpose AI systems (think GPT, Gemini, Claude) need to comply by August 2, 2025.
    → High-risk systems (biometrics, hiring tools, critical infrastructure) must meet regulations by 2026.
    → Legacy and embedded tech systems will have to comply by 2027.

    If you're a Chief Data Officer, here's what should be on your radar:

    1. Data Governance & Risk Assessment: Clearly map your data flows, perform thorough risk assessments similar to those required under GDPR, and carefully document your decisions for audits.
    2. Data Quality & Bias Mitigation: Ensure your data is high-quality, representative, and transparently sourced. Responsibly manage sensitive data to mitigate biases effectively.
    3. Transparency & Accountability: Be ready to trace and explain AI-driven decisions. Maintain detailed logs and collaborate closely with legal and compliance teams to streamline processes.
    4. Oversight & Ethical Frameworks: Implement human oversight for critical AI decisions, regularly review and test systems to catch issues early, and actively foster internal AI ethics education.

    These new regulations won't stop at Europe's borders. Like GDPR, they're likely to set global benchmarks for responsible AI usage. We're entering a phase where embedding governance directly into how organizations innovate, experiment, and deploy data and AI technologies will be essential.
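
The third item on that radar, maintaining detailed logs so AI-driven decisions can be traced and explained, lends itself to an append-only audit trail. Below is a minimal sketch assuming a JSON-lines log file; the record fields and names are illustrative, not mandated by the Code of Practice.

```python
import json
import uuid
from datetime import datetime, timezone

AUDIT_LOG = "ai_decisions.jsonl"  # append-only JSON-lines file (illustrative)

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output, rationale: str) -> str:
    """Append one AI decision record for later audit and tracing."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,        # what the system saw
        "output": output,        # what it decided
        "rationale": rationale,  # human-readable explanation
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]

# Example: record a hypothetical screening decision.
decision_id = log_decision(
    model_id="resume-screener",
    model_version="2025-07-01",
    inputs={"applicant_id": "12345", "role": "data engineer"},
    output="advance_to_interview",
    rationale="Met minimum experience and skills criteria.",
)
print("logged decision", decision_id)
```

Keeping records immutable and timestamped is what lets legal and compliance teams reconstruct a decision after the fact.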

  • Eugina Jordan

    CEO and Founder, YOUnifiedAI | 8 granted patents / 16 pending | AI Trailblazer Award Winner

    41,191 followers

    The G7 Toolkit for Artificial Intelligence in the Public Sector, prepared by OECD.AI and UNESCO, provides a structured framework for guiding governments in the responsible use of AI and aims to balance the opportunities & risks of AI across public services.

    ✅ A resource for public officials seeking to leverage AI while balancing risks. It emphasizes ethical, human-centric development with appropriate governance frameworks, transparency, & public trust.
    ✅ Promotes collaborative, flexible strategies to ensure AI's positive societal impact.
    ✅ Will influence policy decisions as governments aim to make public sectors more efficient, responsive, & accountable through AI.

    Key insights and recommendations:

    Governance & National Strategies:
    ➡️ Importance of national AI strategies that integrate infrastructure, data governance, & ethical guidelines.
    ➡️ Different G7 countries adopt diverse governance structures: some opt for decentralized governance, while others have a single leading institution coordinating AI efforts.

    Benefits & Challenges:
    ➡️ AI can enhance public services, policymaking efficiency, & transparency, but governments must address concerns around security, privacy, bias, & misuse.
    ➡️ AI usage in areas like healthcare, welfare, & administrative efficiency demonstrates its potential; ethical risks like discrimination or lack of transparency remain a challenge.

    Ethical Guidelines & Frameworks:
    ➡️ Focus on human-centric AI development while ensuring fairness, transparency, & privacy.
    ➡️ Some members have adopted additional frameworks like algorithmic transparency standards & impact assessments to govern AI's role in decision-making.

    Public Sector Implementation:
    ➡️ Provides a phased roadmap for developing AI solutions, from framing the problem, prototyping, & piloting solutions to scaling up and monitoring their outcomes.
    ➡️ Engagement and stakeholder input are critical throughout this journey to ensure user needs are met & trust is built.

    Examples of AI in Use:
    ➡️ Use cases include AI tools in policy drafting, public service automation, & fraud prevention. The UK's Algorithmic Transparency Recording Standard (ATRS) and Canada's AI impact assessments serve as examples of operational frameworks.

    Data & Infrastructure:
    ➡️ G7 members are encouraged to open up government datasets & ensure interoperability.
    ➡️ Countries are investing in technical infrastructure to support digital transformation, such as shared data centers and cloud platforms.

    Future Outlook & International Collaboration:
    ➡️ Importance of collaboration across G7 members & international bodies like the EU and the Global Partnership on Artificial Intelligence (GPAI) to advance responsible AI.
    ➡️ Governments are encouraged to adopt incremental approaches, using pilot projects & regulatory sandboxes to mitigate risks & scale successful initiatives gradually.
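
The UK's Algorithmic Transparency Recording Standard, cited in the post above as an operational example, is at heart a structured public disclosure about an algorithmic tool. Here is a minimal sketch of such a record as a data structure; the fields paraphrase the kinds of disclosures the post describes and are not the official ATRS schema.

```python
from dataclasses import dataclass, asdict, field
import json

@dataclass
class AlgorithmicTransparencyRecord:
    """Illustrative transparency record, loosely inspired by ATRS-style disclosures."""
    tool_name: str
    owning_body: str
    purpose: str              # what the tool is used for
    decision_role: str        # e.g. "decision support" vs "automated decision"
    data_sources: list = field(default_factory=list)
    human_oversight: str = "" # how humans review or override outputs
    impact_assessment_done: bool = False

# Hypothetical example record for a public-sector tool.
record = AlgorithmicTransparencyRecord(
    tool_name="Benefit Claim Triage (hypothetical)",
    owning_body="Example Department",
    purpose="Prioritise incoming benefit claims for caseworker review",
    decision_role="decision support",
    data_sources=["claim forms", "case history"],
    human_oversight="Caseworker reviews every triage suggestion",
    impact_assessment_done=True,
)
print(json.dumps(asdict(record), indent=2))
```

Publishing records like this in a machine-readable form is what makes transparency auditable rather than aspirational.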

  • Katharina Koerner

    AI Governance & Security | Trace3: All Possibilities Live in Technology | Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,353 followers

    On May 28, 2024, the Science, Innovation and Technology Select Committee, appointed by the UK House of Commons, published a report on the governance of AI, reviewing developments in AI governance and regulation since an earlier interim report in August 2023: https://lnkd.in/gX4nZrk9

    The report underscores the necessity of fundamentally rethinking the approach to AI, particularly addressing the challenges posed by AI systems that operate as "black boxes" with opaque decision-making processes. It stresses the importance of robust testing of AI outputs to ensure accuracy and fairness when the internal workings of these systems are unclear.

    The report also highlights challenges in regulatory oversight, noting the difficulties faced by the newly established AI Safety Institute in accessing AI models for safety testing, as previously agreed upon by developers. It calls for future government action to enforce compliance and potentially name non-compliant developers.

    The document concludes by emphasizing the need for an urgent policy response to keep pace with AI's rapid development, noting that optimal solutions for AI's challenges aren't always clear. In this context, the report identifies "Twelve Challenges of AI Governance" and proposes initial solutions (see p. 89ff):

    1. Bias Challenge: Addressing inherent biases in AI datasets and ensuring fair outcomes.
    2. Privacy Challenge: Balancing privacy with the benefits of AI, particularly in sensitive areas like law enforcement.
    3. Misrepresentation Challenge: Addressing the misuse of AI in creating deceptive content, including deepfakes.
    4. Access to Data Challenge: Ensuring open and fair access to data necessary for AI development.
    5. Access to Compute Challenge: Providing equitable access to computing resources for AI research and development.
    6. Black Box Challenge: Accepting that some AI processes may remain unexplainable and focusing on validating their outputs.
    7. Open-Source Challenge: Balancing open and proprietary approaches to AI development to encourage innovation while maintaining competitive markets.
    8. Intellectual Property and Copyright Challenge: Developing a fair licensing framework for the use of copyrighted material in training AI.
    9. Liability Challenge: Clarifying liability for harms caused by AI, ensuring accountability across the supply chain.
    10. Employment Challenge: Preparing the workforce for the AI-driven economy through education and skill development.
    11. International Coordination Challenge: Addressing the global nature of AI development and governance without necessarily striving for a unified global framework.
    12. Existential Challenge: Considering the long-term existential risks posed by AI and focusing regulatory activity on immediate impacts while being prepared for future risks.

    Thank you, Chris Kraft, for posting. Follow his incredibly helpful posts on AI governance and AI in the public sphere.
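
Point 6 above, validating outputs when a system's internals are opaque, is often implemented as black-box invariance testing: perturb an input attribute that should be irrelevant and check that the output does not change. A minimal sketch follows, with a stand-in predict function that is purely hypothetical.

```python
def predict(application: dict) -> str:
    """Stand-in for an opaque model API; hypothetical logic for demonstration."""
    return "approve" if application["income"] >= 30_000 else "review"

def invariance_test(model, case: dict, attribute: str, alternatives) -> list:
    """Vary one attribute that should be irrelevant and flag output changes."""
    baseline = model(case)
    failures = []
    for value in alternatives:
        variant = {**case, attribute: value}  # copy the case, swap one field
        if model(variant) != baseline:
            failures.append((value, model(variant)))
    return failures

case = {"income": 42_000, "postcode": "AB1", "applicant_name": "Alex"}
# The applicant's name should never change a credit decision.
failures = invariance_test(predict, case, "applicant_name", ["Sam", "Priya", "Wei"])
print("invariance failures:", failures or "none")
```

The appeal of this technique is that it needs no access to model internals, which is exactly the constraint the report's Black Box Challenge describes.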

  • Kevin Fumai

    Asst. General Counsel @ Oracle | AI Governance

    33,536 followers

    So much happens so quickly in #AIgovernance that I’ve decided to launch a Month in Review. This will only spotlight the key developments that should be on your radar. With that, here’s my Top 10 for January:

    ▶️ The first International AI Safety Report was published. It synthesizes the state of scientific understanding of general-purpose AI, with a focus on managing its current and emerging risks. It’s a must-read filled with technical rigor, balanced policy perspectives, and tangible recommendations.
    🔗 https://lnkd.in/e7vupCba

    ▶️ President Trump started the US down a new path by revoking the foundational 2023 executive order and directing his administration to develop an AI action plan within 180 days. The National AI Advisory Committee promptly provided a 10-point framework.
    🔗 https://lnkd.in/ehzErwiK (EO)
    🔗 https://lnkd.in/exNjVb5y (NAIAC)

    ▶️ The US Copyright Office released a report on the copyrightability of AI-generated works, with nine conclusions or recommendations (and significant supporting research).
    🔗 https://lnkd.in/eJhzRNfV

    ▶️ DeepSeek launched R1, captured attention, created confusion, and sparked concerns. And the global gyrations (and governance implications) are just beginning.
    🔗 https://lnkd.in/eHNGQqtM

    ▶️ The EU AI Office unveiled a draft template that would require GPAI model providers to disclose a “sufficiently detailed summary” of the data used to train their models, including sources.
    🔗 https://lnkd.in/e3rz8Zpi

    ▶️ California’s Attorney General issued AI advisories informing consumers of their rights and companies of their obligations under existing law. This theme continues to resonate around the world, with many other regulators offering similar reminders.
    🔗 https://lnkd.in/eFyazZDq

    ▶️ The US FTC finalized a settlement with IntelliVision over claims related to its facial recognition software. While not expressly tied to Operation AI Comply, the case serves as another example of how existing laws apply to AI and how regulatory enforcement will likely progress.
    🔗 https://lnkd.in/efV3T5u6

    ▶️ The Netherlands updated its AI impact assessment template, offering a new glimpse into the EU AI Act requirement.
    🔗 https://lnkd.in/eURuYdKK

    ▶️ The US FDA proposed guidelines for AI-enabled medical devices and drug development. While not yet finalized, they signal support for innovation so long as rigorous scientific and regulatory standards are satisfied.
    🔗 https://lnkd.in/e9eNVrXB (devices)
    🔗 https://lnkd.in/epN64-6q (drugs)

    ▶️ The World Economic Forum released an “Industries in the Intelligent Age” series, with detailed snapshots of AI’s applications and best practices across seven sectors.
    🔗 https://lnkd.in/evRFN7ZB

  • Pradeep Sanyal

    Enterprise AI Leader | Former CIO & CTO | Chief AI Officer (Advisory) | Data & AI Strategy → Implementation | 0→1 Product Launch

    19,121 followers

    AI governance often sounds abstract. But across the world, small bright spots are showing us what operational responsible AI can look like.

    → Singapore is building open-source AI testing tools to align compliance with practical validation, moving beyond checklists to scalable assurance.
    → Chile mandates that public procurement of AI require proof of fairness, bias testing, and data protection. Procurement is quietly becoming a lever for responsible AI markets.
    → Costa Rica is incubating feminist AI research to embed gender inclusion into design and policy, not as an afterthought.
    → Mexico is using machine learning to preserve endangered languages, reminding us that AI can strengthen, not erase, cultural diversity.
    → Croatia amended labor laws to regulate algorithmic management, ensuring workers have rights in the age of automated decision-making.

    These aren’t moonshots. They are practical interventions that operationalize responsible AI today. What connects these stories is their grounding in the Global Index on Responsible AI (GIRAI), a first-of-its-kind effort assessing 138 countries on their progress, gaps, and pathways toward rights-respecting, human-centered AI.

    Most countries have national AI strategies. Few translate them into enforceable, actionable protections for citizens, workers, or communities. The GIRAI report is a reminder: frameworks are not enough. Responsible AI requires measurable, enforceable action.
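
Singapore's open-source testing tools, mentioned in the post above, capture the shift from checklists to executable checks. As a minimal sketch of that idea, the harness below runs named validation checks against a system and reports pass or fail; every check and threshold here is an illustrative stand-in, not part of any actual toolkit.

```python
from typing import Callable, Dict

def run_assurance_suite(checks: Dict[str, Callable[[], bool]]) -> dict:
    """Run each named check; a check returns True on pass, False on fail."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = "pass" if check() else "fail"
        except Exception as exc:  # a crashing check is itself a finding
            results[name] = f"error: {exc}"
    return results

# Illustrative checks; a real suite would exercise a deployed model and dataset.
report = run_assurance_suite({
    "parity_gap_below_0.1": lambda: 0.04 < 0.1,
    "pii_fields_excluded": lambda: "ssn" not in {"income", "tenure"},
    "model_card_present": lambda: True,
})
print(report)
```

Encoding each policy requirement as a check that can run in a pipeline is what turns a framework into the "measurable, enforceable action" the post calls for.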
