How insurers monitor vendor AI practices


Summary

Insurers monitor vendor AI practices by assessing how third-party AI systems and tools are used, ensuring compliance with regulations, reducing risk, and retaining responsibility for outcomes. This requires ongoing oversight of vendor-provided AI solutions to safeguard data, ensure transparency, and address issues such as bias and accountability.

  • Request clear documentation: Always ask vendors for detailed information about their AI systems, including how they work, what data they use, and how decisions are made.
  • Conduct regular audits: Set up processes to routinely check and review vendor AI outputs, monitoring for accuracy, fairness, and proper data handling.
  • Build contractual safeguards: Make sure your contracts require vendors to disclose incidents, cooperate with risk assessments, and grant the right to audit or test their AI solutions as needed.
Summarized by AI based on LinkedIn member posts
  • View profile for Omer Tene

    Partner, Goodwin

    14,911 followers

    AI vendor management. One of the most pressing challenges companies face these days is vetting and contracting with AI vendors. This comes up in two contexts: (a) vetting solutions from AI vendors that your company considers adopting, and (b) vetting solutions from AI vendors that your vendors have adopted. Where do you start?

    The second question comes up a lot in a GDPR or state privacy law context. Your vendors (processors / service providers) are required by law and/or contract to notify you when they start using a new subprocessor. Consider that companies with hundreds of vendors now get thousands of such notices. “We started using ChatGPT.” “We’re now using GitHub Copilot.” What’s a GC or CPO to do with such notices? In many cases they haven’t even approved the use of the same tools internally themselves.

    When the EU AI Act comes into force, under Article 10(6a) of the Parliament’s draft, the obligations of an AI provider could flow down to deployers of AI: “Where the provider cannot comply with the obligations laid down in this Article because that provider does not have access to the data and the data is held exclusively by the deployer, the deployer may, on the basis of a contract, be made responsible for any infringement of this Article.” All the more reason for companies to *closely* vet the solutions they’re implementing.

    And the new draft CCPA regs on risk assessments require “A service provider or contractor ... [to] cooperate with the business in its conduct of a risk assessment pursuant to Article 10...” The regs focus on such risk assessments for automated decision making as well as the use of data for training AI. https://lnkd.in/ejk4fYgQ

    There are a few useful checklists out there. The attached was created by Amber Nicole Ezzell from FPF based on conversations with more than 30 experts: https://lnkd.in/e8E_t64W This one from PwC is usefully role-based: https://lnkd.in/eZwVAZmB And this one from CNIL provides insight into a regulator’s approach: https://lnkd.in/ewgBrXKj

    I suggest going back to basics: what are the risks to the company’s PII and IP? Is there potential for bias or discrimination? Are there accountability mechanisms, including audit and log trails? Can you ensure explainability?

    From a contractual perspective: look closely at the definitions of customer data, usage data, and confidential information.

    I also recommend consulting with outside counsel. From our vantage point, we see how many companies across industry sectors – both vendors and deployers – cope with and respond to these complex challenges.

  • View profile for Patrick Sullivan

    VP of Strategy and Innovation at A-LIGN | TEDx Speaker | Forbes Technology Council | AI Ethicist | ISO/IEC JTC1/SC42 Member

    10,233 followers

    Most AI systems in use today are built on third-party tools. That includes models, datasets, and full platforms. But using a vendor’s product DOES NOT remove your responsibility. Under most AI regulations, the deployer (likely YOU) is accountable for how the system performs and whether it causes harm.

    #ISO42001 helps organizations manage that risk. It provides a structure for assigning roles, reviewing supplier practices, validating documentation, and managing risk across the lifecycle of the AI system. The standard requires you to:
    🔸 Define who is responsible for each part of the system (Annex A.10.2)
    🔸 Put a process in place to evaluate and monitor suppliers (Annex A.10.3)
    🔸 Confirm that technical documentation is available and complete (Annex A.6.2.7)
    🔸 Reflect supplier-related risks in your own planning and contracts (Clause 6.1.3 and Annex A.10.4)

    Clause 8.1 is clear. You must control how external systems are used inside your organization. This means you cannot treat vendor models as a black box. You are expected to evaluate them and take action if there are risks.

    The Cloud Security Alliance offers helpful questions to ask vendors, including whether they align with ISO42001 and whether they assess their own supply chain. If your organization is deploying AI, you should be treating suppliers as part of your governance process. Not doing so creates legal and operational exposure.

    A-LIGN #TheBusinessOfCompliance #ComplianceAlignedtoYou
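
A minimal sketch (Python) of what tracking supplier-oversight evidence against the controls cited in this post could look like. The control identifiers (A.10.2, A.10.3, A.6.2.7, 6.1.3/A.10.4) come from the post; everything else (field names, the `open_gaps` helper, the example vendor) is a hypothetical illustration, not language from ISO/IEC 42001 or A-LIGN's methodology.

```python
from dataclasses import dataclass, field

# Controls named in the post; descriptions paraphrase the post, not the standard.
REQUIRED_EVIDENCE = {
    "A.10.2": "Documented allocation of responsibilities for the AI system",
    "A.10.3": "Completed supplier evaluation and a scheduled monitoring review",
    "A.6.2.7": "Vendor technical documentation received and reviewed",
    "6.1.3/A.10.4": "Supplier risks reflected in risk treatment plan and contract",
}

@dataclass
class SupplierRecord:
    name: str
    ai_component: str                                          # e.g. "claims triage model"
    evidence: dict[str, bool] = field(default_factory=dict)    # control id -> evidence on file?

    def open_gaps(self) -> list[str]:
        """Return the controls for which no evidence has been recorded yet."""
        return [f"{cid}: {desc}" for cid, desc in REQUIRED_EVIDENCE.items()
                if not self.evidence.get(cid, False)]

# Hypothetical example entry
vendor = SupplierRecord(
    name="ExampleVendor",
    ai_component="document extraction model",
    evidence={"A.10.2": True, "A.6.2.7": False},
)
for gap in vendor.open_gaps():
    print("Missing evidence ->", gap)
```

In practice such a register would live in a GRC tool rather than code; the point is simply that each supplier gets an explicit, reviewable evidence status per control instead of a black-box assumption.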

  • View profile for Alex Pezold

    Co-Founder | CEO (No recruiters, please!)

    4,317 followers

    How Do We Audit AI Outputs and Ensure Accuracy? In insurance, intelligent automation isn’t enough. You need explainability, traceability, and operational oversight—especially when decisions carry real risk. At Agentech, we’ve embedded auditability into the core of our platform so Claims, IT, and Compliance leaders can inspect what they expect.

    Linked Decision Logic
    Every AI output includes a direct link to the policy clause, regulation, or business rule that informed the recommendation. No black boxes.

    Tamper-Proof Logs
    All decision activities are captured in tamper-evident logs—ready for internal compliance teams, regulators, or external auditors.

    Benchmark-Driven Validation
    Before deployment, agents are tested against real-world claim scenarios and validated against performance benchmarks set by the customer.

    Escalation When It Matters
    If confidence in an output drops or data is ambiguous, the task is automatically flagged and routed for human review—keeping critical decisions in the right hands.

    Governed Learning Framework
    Retraining isn’t reactive. It’s governed by structured reviews, not just system usage. That means improvements stay intentional and aligned with your goals.

    You don’t just deploy our AI. You govern it, trace it, and trust it.

    #AIinClaims #InsuranceAnalytics #Auditability #AICompliance #InsurtechLeaders #ClaimsExecutives #DigitalClaims
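
The tamper-evident logging and confidence-based escalation described above can be illustrated with a short sketch: hash-chained log entries plus a review threshold. This is a generic pattern, not Agentech's actual implementation; the field names, threshold value, and sample claims are invented for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.80  # illustrative cutoff; real thresholds are set per use case

def append_decision(log: list[dict], claim_id: str, recommendation: str,
                    confidence: float, source_reference: str) -> dict:
    """Append a tamper-evident entry: each record hashes the previous one,
    so any later edit breaks the chain and is detectable on audit."""
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "claim_id": claim_id,
        "recommendation": recommendation,
        "confidence": confidence,
        "source_reference": source_reference,   # policy clause / rule behind the output
        "needs_human_review": confidence < CONFIDENCE_THRESHOLD,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; returns False if any entry was altered after the fact."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

# Hypothetical usage: one high-confidence output, one flagged for human review
log: list[dict] = []
append_decision(log, "CLM-1042", "approve", 0.93, "Policy section 4.2 (water damage)")
append_decision(log, "CLM-1043", "deny", 0.61, "Exclusion clause 7.1")
print(verify_chain(log), [e["needs_human_review"] for e in log])
```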

  • View profile for AD E.

    GRC Visionary | Cybersecurity & Data Privacy | AI Governance | Pioneering AI-Driven Risk Management and Compliance Excellence

    10,131 followers

    There are several AI-powered tools specifically designed to streamline compliance tracking, risk assessments, and third-party risk management (TPRM). These tools typically use AI and machine learning to automate data analysis, monitor for risks, and support regulatory requirements.

    Compliance Tracking Tools
    1. LogicGate Risk Cloud
    • Offers automated compliance workflows.
    • Tracks and maps controls to frameworks like GDPR, HIPAA, and SOC 2.
    • AI helps identify gaps and automate evidence collection.
    2. Hyperproof
    • Centralized compliance operations platform.
    • Automates control monitoring and integrates with tools like Jira and Slack.
    • AI features flag anomalies and track continuous compliance.
    3. OneTrust
    • Popular for privacy compliance (GDPR, CCPA).
    • Uses AI to manage data subject requests and maintain compliance posture.
    • Automates data mapping and impact assessments.
    4. ComplyAdvantage
    • Specializes in AML/KYC and sanctions screening.
    • AI detects compliance risks in transactions and customer profiles.

    Risk Assessment Tools
    1. ServiceNow GRC
    • Integrates AI-driven risk scoring and predictive analytics.
    • Helps conduct enterprise risk assessments and track mitigation activities.
    2. RSA Archer
    • Offers advanced risk quantification.
    • Uses AI to predict risks and prioritize remediation.
    3. MetricStream
    • Enables risk identification, assessment, and mitigation workflows.
    • AI for real-time risk indicators and trend analysis.
    4. IBM OpenPages with Watson
    • Leverages IBM Watson AI to automate risk identification and control testing.
    • Strong in regulatory compliance and internal audits.

    Third-Party Risk Management (TPRM) Tools
    1. SecurityScorecard
    • Uses AI to continuously monitor the cybersecurity posture of vendors.
    • Provides letter-grade risk scores for third parties.
    2. BitSight
    • Offers external risk ratings and threat detection.
    • AI analyzes global signals to monitor vendor risk in real time.
    3. Aravo
    • Automates third-party risk workflows, including onboarding, due diligence, and monitoring.
    • AI flags high-risk entities based on configurable parameters.
    4. Prevalent
    • Delivers vendor assessments, continuous monitoring, and threat intelligence.
    • AI helps streamline risk classification and remediation recommendations.

    Honorable Mentions (Cross-Functionality)
    • Drata – Automated SOC 2, ISO 27001, and HIPAA compliance.
    • Vanta – Simplifies audits and evidence collection with real-time monitoring.
    • AuditBoard – Combines audit, risk, and compliance management with analytics and AI insights.

    #GRC #Compliance #RiskManagement #ThirdPartyRisk #AuditTech #RegTech #Governance #AIGRC #AICompliance #AITools #Automation #TechForGood #CybersecurityAI #InfoSec #CyberCompliance #PrivacyTech #SecurityRisk #DigitalGovernance #CloudCompliance #Innovation #FutureOfWork #EnterpriseTech #DataDriven

  • View profile for Odia Kagan

    CDPO, CIPP/E/US, CIPM, FIP, GDPRP, PLS, Partner, Chair of Data Privacy Compliance and International Privacy at Fox Rothschild LLP

    24,183 followers

    The AI onboarding struggle is real. A new memo from the Office of Management and Budget tells government agencies what to add to their diligence processes and AI procurement contracts. Here is what we are working on with clients, as discussed with Melinda R. Lewis at the Fox Rothschild Federal Government Contracts Symposium.

    🔹️ The new OMB memo adds practical steps for implementation of the March 2024 memo (summary here: https://lnkd.in/e4BeS8bD).
    🔹️ All agencies of the Federal Government must ensure their AI acquisitions comply with the risk management requirements identified in the March memo and this memo, while also continuing to prioritize privacy, security, data ownership, and interoperability, including the requirements of this memo and the NIST AI RMF.
    🔹️ Agencies must cease use of AI systems or services that impact rights or safety in cases where required risk management practices cannot be sufficiently implemented, as determined by the agency.

    To do by December 2024: At the diligence stage, agencies must:
    🔹️ Require sufficient transparency from the vendor (incorporate into the RFP and the contract), commensurate with the risk and impact of the use case for which the AI system or service will be used.
    🔹️ Avoid Bias, Discrimination, and Harmful Outcomes: address bias and require the vendor to identify and provide mitigation strategies.

    In the contract:
    🔹️ Require vendors to provide information about: performance metrics; training data; programmatic evaluations of the AI system or service, including the methodology, design, data, and results of evaluation; testing and validation data; how input data is used, transformed, and retained by the AI; the AI model(s) integrated; and data protection metrics or assurance indicators.
    🔹️ Build in monitoring of the system and ensure, contractually, that the vendor is responsible for this, for example by requiring vendors to provide sufficient access and time to conduct any required testing, or to regularly provide the results of testing.
    🔹️ Ensure the ability, throughout the entire lifecycle of the contract, to update risk mitigation options and prioritize performance improvement of the AI system or service, including requiring vendors to: regularly monitor the system’s performance; meet performance standards before deploying a new version; participate in program evaluations; document tools, techniques, coding methods, and testing results; have a process for identifying and disclosing to agencies serious AI incidents and malfunctions of the acquired AI system or service within 72 hours; and help with notice and appeal procedures.
    🔹️ Specific requirements are included for Generative AI and Biometrics.

    #dataprivacy #dataprotection #privacyFOMO #AIFOMO Pic by ChatGPT https://shorturl.at/kOKAv
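
As a small illustration of how a vendor-management team might operationalize the 72-hour incident-disclosure requirement summarized above, here is a hedged Python sketch. The 72-hour window comes from the post; the function name, status labels, and example timestamps are hypothetical.

```python
from datetime import datetime, timedelta, timezone

DISCLOSURE_WINDOW = timedelta(hours=72)  # disclosure window cited in the memo summary above

def disclosure_status(detected_at: datetime, disclosed_at: datetime | None,
                      now: datetime | None = None) -> str:
    """Classify a vendor AI incident against a 72-hour disclosure clause."""
    now = now or datetime.now(timezone.utc)
    deadline = detected_at + DISCLOSURE_WINDOW
    if disclosed_at is not None:
        return "disclosed on time" if disclosed_at <= deadline else "disclosed late"
    return "awaiting disclosure" if now <= deadline else "OVERDUE - follow up with vendor"

# Hypothetical incident detected Monday 09:00 UTC, still undisclosed four days later
print(disclosure_status(
    detected_at=datetime(2024, 12, 2, 9, 0, tzinfo=timezone.utc),
    disclosed_at=None,
    now=datetime(2024, 12, 6, 9, 0, tzinfo=timezone.utc),
))  # -> "OVERDUE - follow up with vendor"
```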

  • View profile for Shea Brown

    AI & Algorithm Auditing | Founder & CEO, BABL AI Inc. | ForHumanity Fellow & Certified Auditor (FHCA)

    21,987 followers

    New York State DFS is looking for comments on a proposed circular letter that outlines proper risk management for AI systems and external data used in insurance underwriting. The "Proposed Insurance Circular Letter" addresses the use of Artificial Intelligence Systems (AIS) and External Consumer Data and Information Sources (ECDIS) in insurance underwriting and pricing. The key points include:

    💡 Purpose and Background: The DFS aims to foster innovation and responsible technology use in the insurance sector. It acknowledges the benefits of AIS and ECDIS, but also highlights potential risks such as reinforcing systemic biases, leading to unfair or discriminatory outcomes.
    💡 Definitions and Scope: AIS refers to machine-based systems that perform functions akin to human intelligence, such as reasoning and learning, used in insurance underwriting or pricing. ECDIS includes data used to supplement or proxy traditional underwriting and pricing but excludes specific traditional data sources like MIB Group exchanges, motor vehicle reports, or criminal history searches.
    💡 Management and Use: Insurers are expected to develop and manage their use of ECDIS and AIS in a manner that is reasonable and aligns with their business model.
    💡 Fairness Principles: Insurers must ensure that ECDIS and AIS do not use or are not based on protected class information, do not result in unfair discrimination, and comply with all applicable laws and regulations.
    💡 Data Actuarial Validity: The data used must adhere to generally accepted actuarial practices, demonstrating a significant, rational, and non-discriminatory relationship between the variables used and the risk insured.
    💡 Unfair and Unlawful Discrimination: Insurers must establish that their underwriting or pricing guidelines derived from ECDIS and AIS do not result in unfair or unlawful discrimination, including performing comprehensive assessments and regular testing.
    💡 Governance and Risk Management: Insurers are required to have a corporate governance framework that provides oversight. This includes board and senior management oversight, formal policies and procedures, documentation, and internal control mechanisms.
    💡 Third-Party Vendors: Insurers remain responsible for ensuring that tools, ECDIS, or AIS developed or deployed by third-party vendors comply with all applicable laws and regulations.
    💡 Transparency and Disclosure: Insurers must disclose their use of ECDIS and AIS in underwriting and pricing.

    📣 Feedback Request: The Department is seeking feedback on the circular letter by March 17, 2024, encouraging stakeholders to contribute to the proposed guidance.

    #ai #insurance #aigovernance #airiskmanagement Jeffery Recker, Dr. Benjamin Lange, Borhane Blili-Hamelin, PhD, Kenneth Cherrier
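
To illustrate just the "significant relationship between the variables used and the risk insured" idea in the actuarial-validity point above, here is a hedged sketch of a two-proportion z-test on claim frequencies for two levels of a candidate rating variable. The numbers and names are invented, and real actuarial validation involves credibility, multivariate modeling, and review by a credentialed actuary; treat this only as a minimal illustration of the basic statistical question.

```python
import math

def claim_frequency_z(claims_a: int, exposures_a: int,
                      claims_b: int, exposures_b: int) -> float:
    """Two-proportion z-statistic comparing claim frequency between two
    levels (A, B) of a candidate rating variable."""
    p_a = claims_a / exposures_a
    p_b = claims_b / exposures_b
    p_pool = (claims_a + claims_b) / (exposures_a + exposures_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / exposures_a + 1 / exposures_b))
    return (p_a - p_b) / se

# Hypothetical data: 480 claims on 10,000 exposures vs 350 claims on 10,000 exposures
z = claim_frequency_z(480, 10_000, 350, 10_000)
print(round(z, 2), "significant at ~5%" if abs(z) > 1.96 else "not significant")
```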

  • View profile for Jodi Daniels

    Practical Privacy Advisor / Fractional Privacy Officer / AI Governance / WSJ Best Selling Author / Keynote Speaker

    19,760 followers

    If your team is asking, “Can we use this AI tool?” you need governance. Especially when AI systems can develop discriminatory bias, give incorrect advice, leak customer data, introduce security flaws, and perpetuate outdated assumptions about users.

    AI governance programs and assessments are no longer an optional best practice. They’re on the fast track to becoming mandatory as several AI regulations roll out, most notably for high-risk AI use. I recommend AI assessments beyond high-risk use cases to also capture privacy, security, and ethical risks.

    Here’s how companies can conduct an AI risk assessment:
    ✔ Start by building an AI data inventory. List every AI tool in use, including hidden ones embedded inside vendor software. Capture data inputs, the decisions it makes, who has access, and outputs.
    ✔ Assess the decision impact. Identify where wrong AI decisions could cause harm or discriminate, and review AI systems thoroughly to understand whether they involve high-risk uses.
    ✔ Examine company data sources. Check whether your training data is current, representative, and free from historical bias. Confirm you have disclosures and permissions for use.
    ✔ Test for bias and fairness. Run scenarios through AI systems with different demographic inputs and look for discrepancies in outcomes.
    ✔ Document everything. Maintain detailed records of the assessment process, findings, and changes you make. Regulations like the EU AI Act and the Colorado AI Act have specific requirements for documenting high-risk AI usage.
    ✔ Build monitoring checkpoints. Set regular reviews and repeat risk assessments when new products or services are introduced or as models, vendors, business needs, or regulations change.

    AI oversight isn’t coming someday. It’s here. Companies that start preparing now will be ready when the new regulations come into force.

    Read our full blog for more tips and to see how to put this into action 👇
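
The "test for bias and fairness" step above can be made concrete with a small sketch: run the same scenarios through the system while varying only the demographic attribute, then compare favorable-outcome rates across groups. This is a generic illustration (a simple selection-rate comparison against the commonly cited four-fifths rule of thumb); the function names, threshold, and sample data are hypothetical, and real fairness testing would use larger samples and additional metrics.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, favorable_decision) pairs from scenario runs."""
    totals: dict[str, int] = defaultdict(int)
    favorable: dict[str, int] = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favorable[group] += ok
    return {g: favorable[g] / totals[g] for g in totals}

def disparity_flags(rates: dict[str, float], threshold: float = 0.8) -> dict[str, bool]:
    """Flag groups whose favorable-outcome rate falls below `threshold` times
    the best-treated group's rate (the four-fifths rule of thumb)."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical scenario runs: identical inputs except the demographic attribute
runs = [("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False)]
rates = selection_rates(runs)
print(rates, disparity_flags(rates))  # group_b gets flagged in this toy example
```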

  • View profile for Supro Ghose

    CIO | CISO | Cybersecurity & Risk Leader | Federal & Financial Services | Cloud & AI Security | NIST CSF/RMF | Board Reporting | Digital Transformation | GenAI Governance | Banking & Regulatory Ops

    14,677 followers

    In today’s banking ecosystem, third-party vendor systems are essential, as much of the data that drives decision-making comes from external sources. Vendor risk management teams should scrutinize these systems to ensure data accuracy, security, and regulatory compliance. With increasing reliance on AI models, banks face risks like data bias, which can lead to flawed decision-making. As NYS DFS Superintendent Adrienne A. Harris emphasized recently (see link below), banks are ultimately responsible for vetting third-party AI systems. Therefore, vendor management processes need to evolve to include critical AI assessments to maintain accountability. https://lnkd.in/dBzA3Hm2

    To align with Harris’s guidance, vendor management teams could ask AI vendors the following questions:
    1. Are you using AI models in the data that you provide to our company for decision-making?
    2. How do you ensure your AI models are free from bias, and what steps are in place to mitigate bias?
    3. Can you provide transparency into the data sources used to train your AI models, and how do you verify data quality?
    4. What processes are in place to comply with regulatory guidelines and other regulatory frameworks regarding AI bias?
    5. What mechanisms do you have to audit and monitor AI performance for fairness and accuracy?
    6. Ask for additional details, such as independent model safety testing and a review of their overall AI risk management policies.

    These questions will help ensure third-party systems meet ethical and regulatory standards, minimizing risks related to AI bias in the banking sector.

    On October 16th, NYS DFS enhanced the NY cybersecurity regulation (23 NYCRR Part 500) by providing guidance to regulated entities on addressing and combating cybersecurity risks arising from artificial intelligence. https://lnkd.in/dRCJtzRi
