🚨 The latest NAIC AI/ML Survey is out (Health Insurance), and it's a must-read for anyone serious about responsible AI in insurance. 🚨

As a longtime follower of the NAIC's evolving body of work on artificial intelligence and machine learning across insurance lines (Life, Auto, Homeowners, and now Health), I can say with confidence: this latest Health AI/ML Survey Report is one of the most comprehensive looks at how AI is actually being used in the real world.

Key takeaways from the Health AI Survey:
✅ 84% of surveyed health insurers are actively using AI/ML
✅ AI is being applied in utilization management, prior authorization, disease management, fraud detection, and sales experiences
✅ 92% of respondents have adopted AI governance aligned with the NAIC AI Principles
✅ Companies are testing for bias, drift, and equity, and are integrating human oversight into AI decisions

But here's the bigger story:
💡 This is now the third major NAIC AI/ML survey, building on similar in-depth studies of Life, Auto, and Homeowners insurers. That means we now have a multi-line, multi-sector view of how AI is shaping decisions across regulated insurance products: from pricing to fraud detection, from underwriting to claims.

🧭 For anyone in a regulated industry, whether in finance, insurance, healthcare, or beyond, these surveys offer a blueprint:
- How to govern AI responsibly
- How to build transparency and oversight
- How to align with emerging state and federal expectations
- How to balance innovation with consumer protection

🎯 My key message: don't just chase the hype around "agentic AI." If you're building for the future, master the AI/ML fundamentals first. Most of the real impact today is still driven by traditional supervised and unsupervised machine learning (classification, prediction, anomaly detection, recommendation), not just autonomous agents. Understanding core AI/ML techniques, governance, and ethics is still the foundation.

What's next?
The NAIC is exploring a model law/regulation on AI governance. Stakeholder input is being requested now — a critical moment to help shape the future of AI oversight. This is more than an insurance story. It’s a signal of what responsible AI regulation can (and should) look like. Full report link in the comments. #AIinInsurance #NAIC #ResponsibleAI #Insurance #Governance #InsurTech #MachineLearning #ConsumerProtection
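To make the point above concrete, here is a minimal, purely illustrative sketch of one of the "traditional ML" workhorses the post mentions: anomaly detection for fraud screening, implemented as a simple z-score test on claim amounts. The data, function name, and threshold are hypothetical assumptions for illustration, not anything drawn from the NAIC report.

```python
# Hypothetical sketch: flag claims whose amount deviates strongly
# from the historical mean (a basic z-score anomaly detector).
from statistics import mean, stdev

def flag_anomalous_claims(amounts, threshold=3.0):
    """Return indices of claim amounts more than `threshold` standard
    deviations from the mean of the batch."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

claims = [120, 130, 110, 125, 118, 122, 5_000]  # one obvious outlier
print(flag_anomalous_claims(claims, threshold=2.0))  # → [6]
```

Real fraud-detection pipelines are far more sophisticated, but the governance questions the surveys raise (bias testing, drift monitoring, human review of flags) apply even to a detector this simple.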
Public oversight of insurance algorithms
Summary
Public oversight of insurance algorithms refers to the monitoring and regulation of how insurance companies use artificial intelligence and data-driven tools to make decisions about coverage, pricing, and claims. As these algorithms increasingly affect access to healthcare and its cost, stakeholders are pushing for greater transparency, fairness, and accountability to protect consumers.
- Demand transparency: Encourage insurers to openly explain how their algorithms make decisions, ensuring consumers understand how coverage and pricing are determined.
- Support human review: Advocate for policies that require qualified professionals to review AI decisions, especially those that affect patient care or access to services.
- Monitor for fairness: Urge regular audits of insurance algorithms to detect and prevent biased or discriminatory outcomes in plan administration and claims processing.
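The "monitor for fairness" recommendation above can be made concrete with a small, hypothetical audit sketch: compare approval rates across demographic groups and compute a disparity ratio. The group labels, decisions, and the 0.8 ("four-fifths") screening threshold are illustrative assumptions, not requirements from any regulation cited here.

```python
# Hypothetical fairness-audit sketch: approval-rate disparity by group.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved: bool) -> {group: approval rate}."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok  # bool counts as 0/1
    return {g: approved[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest group approval rate divided by highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates)                          # A ≈ 0.67, B ≈ 0.33
print(disparity_ratio(rates) >= 0.8)  # → False (flags a disparity)
```

A real audit would control for legitimate rating factors before drawing conclusions; the point is only that "regular audits" can start as simple, repeatable computations over decision logs.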
Attention, #insurance professionals! The global insurance watchdog, the IAIS, has just launched a public consultation on its draft Application Paper on the supervision of artificial intelligence. The paper covers four broad sections:

1. Governance and accountability: the need to integrate AI into risk management systems, provide human oversight of AI risks, and address considerations around the use of third parties.
2. Robustness, safety and security: issues related to the robustness, safety and security of AI systems.
3. Transparency and explainability: the need for AI outcomes to be explainable and tailored to the needs of different stakeholders.
4. Fairness, ethics and redress: the need for fairness by design, monitoring of outcomes, and adequate redress mechanisms. This section also highlights the need for supervisors and insurers to consider the broad societal impacts of granular risk pricing on the principle of risk pooling.

Feedback on the document is invited by 17 February 2025. Don't miss this opportunity to share your insights!
__________
👉 For the latest InsurTech regulatory and policy news, subscribe to my insurtech4good.com newsletter.
♻️ Re-share this to help your colleagues stay ahead in InsurTech.
-
Note that insurers of #Doctors and #Hospitals require their insured customers to use only approved Software as a Medical Device (SaMD) AI in healthcare practice; otherwise the customers must do all the approval work themselves, including verification and validation (V&V).

The IAIS Draft Application Paper on the Supervision of Artificial Intelligence does not explicitly require the formal approval or certification of AI systems before use. Instead, it provides guidance to supervisors and insurers on how to apply the existing Insurance Core Principles (ICPs) to ensure that AI systems are implemented and managed responsibly. The focus is on governance, accountability, risk management, and compliance, rather than mandating a specific approval process for AI systems.

Key aspects relevant to AI "approval":

1. Supervisory oversight: Insurers remain responsible for understanding and managing AI systems, including those developed by third-party providers. Supervisors are encouraged to use tools like audits, surveys, and risk-based frameworks to evaluate AI systems.
2. Risk-based approach: The proportionality principle allows supervision to focus more rigorously on AI systems with higher risks, such as those affecting underwriting or claims decisions. Insurers must assess and manage AI-related risks, including fairness, bias, transparency, and cybersecurity.
3. Governance requirements: Insurers must establish strong governance frameworks for AI, covering aspects like traceability, explainability, and fairness. These frameworks ensure that AI systems operate within ethical and legal boundaries.
4. Accountability: AI systems do not operate outside regulatory frameworks. Insurers using AI must document processes, maintain traceability, and ensure that human oversight and accountability mechanisms are in place. For third-party systems, insurers are expected to obtain sufficient information about the AI's development and performance, ensuring compliance with regulatory standards.
5. Fairness and ethics: AI systems must adhere to principles of fairness, avoiding discrimination or biased outcomes. Supervisors are tasked with ensuring these principles are embedded in insurers' processes.

Implications for healthcare AI. For healthcare applications, such as tools used in diagnostics, claims processing, or risk assessments:
- Supervisors could evaluate whether the AI systems comply with existing regulations (e.g., GDPR for data protection, and medical device regulations where applicable).
- AI systems may indirectly need to meet requirements tied to insurer obligations, including explainability, fairness, and consumer protection.
- Specific healthcare regulatory bodies (like the FDA in the US) may have their own approval processes for AI classified as a medical device, which might interact with insurance-specific supervision.

#ai #artificialintelligence #samd #aiamd #medicaldevices #regulatoryaffairs #regulation #regulatorycompliance #healthcare #digitalhealth #research #eo #euaiact #hhs #fda #scientificresearch #llm
-
Senator Amy Klobuchar has sent a letter to the Attorney General's office and to the Federal Trade Commission asking for an investigation into the use of algorithmic pricing tools by health insurers. "Recent reporting has indicated that firms may be using algorithmic tools to undermine competition and push additional costs onto patients that receive healthcare out of their insurance network," Klobuchar wrote, citing New York Times reporting.

In the letter, Klobuchar specifically cited MultiPlan, a Massachusetts-based company specializing in claim-cost management. The firm sells data to help insurance companies determine how much they should pay providers for out-of-network medical care, and how much of that cost is passed on to patients, Klobuchar said in the letter, again citing the New York Times report.

"While it is common for patients to pay different rates for out-of-network care, I am concerned that – rather than competing for business from employers by reducing these costs to employees – algorithmic tools are processing data gathered across numerous competitors to subvert competition among insurance companies," Klobuchar wrote to the AG and FTC. "The result is that – instead of competing with each other – insurance companies are pushing additional hidden costs on to employees and patients."

Klobuchar has introduced legislation called the Preventing Algorithmic Collusion Act of 2024 to stop anticompetitive conduct via pricing algorithms. https://lnkd.in/gDpWCzps
-
Could the Office for Civil Rights be another avenue for healthcare professionals or organizations to contact regarding denial of healthcare resulting from health plans' use of AI?

➡️ OCR sends a "Dear Colleague" letter: Ensuring Nondiscrimination Through the Use of Artificial Intelligence
➡️ OCR suggests it has authority to regulate and enforce the use of AI that could result in discriminatory outcomes: "OCR is a federal regulator and law enforcement agency that is uniquely positioned to safeguard the public’s trust in the use of AI and other emerging technologies in health care."
➡️ "tools, including AI, are used by... payers (e.g., health insurance issuers), in their health programs and activities for functions like screening, risk prediction, diagnosis, prognosis, clinical decision-making, treatment planning, health care operations, and allocation of resources, all of which affect the care that individuals receive."
➡️ OCR states a plan and factors to assess covered entity (including health plan) use of AI
➡️ Health plans should: "Utilize staff to override and report potentially discriminatory decisions made by a patient care decision support tool, including a mechanism for ensuring “human in the loop” review of a tool’s decision by a qualified human professional"
➡️ and "Audit performance of tools in “real world” scenarios and monitor the tool for discrimination"

https://lnkd.in/gSQHa9KM
-
New York State DFS is looking for comments on a proposed circular letter that outlines proper risk management for AI systems and external data used in insurance underwriting. The "Proposed Insurance Circular Letter" addresses the use of Artificial Intelligence Systems (AIS) and External Consumer Data and Information Sources (ECDIS) in insurance underwriting and pricing. The key points include:

💡 Purpose and Background: The DFS aims to foster innovation and responsible technology use in the insurance sector. It acknowledges the benefits of AIS and ECDIS, but also highlights potential risks such as reinforcing systemic biases, leading to unfair or discriminatory outcomes.
💡 Definitions and Scope: AIS refers to machine-based systems that perform functions akin to human intelligence, such as reasoning and learning, used in insurance underwriting or pricing. ECDIS includes data used to supplement or proxy traditional underwriting and pricing but excludes specific traditional data sources like MIB Group exchanges, motor vehicle reports, or criminal history searches.
💡 Management and Use: Insurers are expected to develop and manage their use of ECDIS and AIS in a manner that is reasonable and aligns with their business model.
💡 Fairness Principles: Insurers must ensure that ECDIS and AIS do not use or are not based on protected class information, do not result in unfair discrimination, and comply with all applicable laws and regulations.
💡 Data Actuarial Validity: The data used must adhere to generally accepted actuarial practices, demonstrating a significant, rational, and non-discriminatory relationship between the variables used and the risk insured.
💡 Unfair and Unlawful Discrimination: Insurers must establish that their underwriting or pricing guidelines derived from ECDIS and AIS do not result in unfair or unlawful discrimination, including performing comprehensive assessments and regular testing.
💡 Governance and Risk Management: Insurers are required to have a corporate governance framework that provides oversight. This includes board and senior management oversight, formal policies and procedures, documentation, and internal control mechanisms.
💡 Third-Party Vendors: Insurers remain responsible for ensuring that tools, ECDIS, or AIS developed or deployed by third-party vendors comply with all applicable laws and regulations.
💡 Transparency and Disclosure: Insurers must disclose their use of ECDIS and AIS in underwriting and pricing.

📣 Feedback Request: The Department is seeking feedback on the circular letter by March 17, 2024, encouraging stakeholders to contribute to the proposed guidance.

#ai #insurance #aigovernance #airiskmanagement Jeffery Recker, Dr. Benjamin Lange, Borhane Blili-Hamelin, PhD, Kenneth Cherrier
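The "Data Actuarial Validity" point above, that a variable must show a demonstrable relationship to the risk insured, can be illustrated with a toy statistical check: computing the Pearson correlation between a candidate rating variable and observed losses. The variable names and data are hypothetical, and a real actuarial validation would go well beyond a single correlation.

```python
# Hypothetical sketch: does a candidate ECDIS-derived variable have a
# demonstrable statistical relationship with observed losses?
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

risk_score = [1, 2, 3, 4, 5]               # hypothetical rating variable
observed_loss = [100, 180, 260, 340, 420]  # perfectly linear toy losses
print(round(pearson_r(risk_score, observed_loss), 3))  # → 1.0
```

In practice, actuaries would also test statistical significance, check for confounding with protected-class proxies, and document the rationale, which is exactly what the circular letter's fairness and testing provisions call for.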
-
Shaping responsible AI regulation in the insurance industry (NAIC)

As regulators like the NAIC accelerate efforts to shape responsible AI standards in insurance, collaboration between technology and governance leaders has never been more critical. The Databricks AI Security Framework (DASF) is helping set the stage for robust, practical AI oversight, offering concrete guidance that can inform the NAIC's landmark Model AI Law and similar regulatory initiatives worldwide.

DASF transforms AI governance from a checklist into a foundation for trust, fairness, and transparency across the insurance lifecycle. By formalizing security and auditing controls, and mapping them to global standards, DASF helps insurers demonstrate compliance, explainability, and real-time accountability to both regulators and policyholders. This means automated bias detection, always-on monitoring, and detailed audit trails are built in from day one, not bolted on after the fact.

Databricks' partnership with Deloitte shows how these frameworks translate into board-level responsibility, future-ready oversight, and continuous risk management, all deeply integrated in platforms insurers already use. As industry standards evolve, DASF's proactive approach makes it easier for insurers not just to keep up, but to lead, turning governance into a competitive advantage.

Read more: https://lnkd.in/gKMYmpDa

Marcela Granados Lavoie Erin Butler Karen Puder Anindita Mahapatra Kimberly Hatton Courtney Parry David Zalk Shivakumar Govindaraju