Data Privacy Issues With AI

Explore top LinkedIn content from expert professionals.

  • View profile for Jon Suarez-Davis (jsd)

    Chief Strategy Officer @ Transparent Partners | Investor | Advisor | Digital Transformation Leader | Ex: Salesforce, Krux, Kellogg’s

    17,826 followers

Google's cookies announcement isn't the week's big news; Oracle's $115 million privacy settlement is. 👇🏼

This week's most important news headline is: "Oracle's $115 million privacy settlement could change industry data collection methods." Every marketer and media leader should understand the allegations in the complaint and review their data strategy, policies, processes, and protocols, especially as they pertain to third-party data. While we've been talking and fretting about cookie deprecation for four years, we've missed the plot on data permission and usage. It's time to get our priorities straight. Article in the comments section; industry reaction from legal and data experts below.

Jason Barnes, partner at the Simmons Hanly Conroy law firm: "This case is groundbreaking. The allegations in the complaint were that Oracle was building detailed dossiers about consumers with whom it had no first-party relationship. Rather than face a jury, Oracle agreed to a significant monetary settlement and also announced it was getting out of the business," Barnes said. "The big takeaway is that surveillance tech companies that lack a first-party relationship with consumers have a significant problem: no American has actually consented to having their personal information surveilled everywhere they go by a company they've never heard of, packaged into a commoditized dossier, and then monetized and sold without their knowledge."

Debbie Reynolds, Founder, Chief Executive Officer, and Chief Data Privacy Officer at Debbie Reynolds Consulting, LLC: "Oracle's privacy case settlement is a significant precedent and highlights that privacy risks are now recognized as business risks, with reduced profits, increased regulatory pressure, and higher consumer expectations impacting organizations' bottom lines," Reynolds said. "One of the most important features of this settlement is Oracle's agreement to stop collecting user-generated information from external URLs and online forms, which is a significant concession in how they do business. Other businesses should take note."

#marketing #data #media Ketch super{set}

  • View profile for Dr. Barry Scannell
Dr. Barry Scannell is an Influencer

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    56,652 followers

By next year we will be producing as much data every 15 minutes as all of human civilisation did up to the year 2003. Data might be the new oil, but it's unrefined. AI companies are the new oil refineries.

Many companies are quietly changing their Terms and Privacy Policies to allow them to use this data for machine learning, and the FTC weighed in on this in a blog post last week. Organisations reviewing their policies and documentation need to be mindful of how AI is addressed, in data protection documents in particular, and more broadly in T&Cs and contracts.

In its recent blog on the subject, the FTC says: "It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers' data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy."

The temptation for companies to unilaterally amend their privacy policies for broader data utilisation is palpable, driven by the dual forces of business incentive and technological evolution. However, such surreptitious alterations, aimed at circumventing user backlash, tread dangerously close to legal and ethical boundaries. We have already seen major companies fall foul of consumer backlash when they attempted to change their terms along these lines.

Historically, the FTC in the US has taken a firm stance against what it deems deceptive practices. Cases like Gateway Learning Corporation and a notable genetic testing company underscore the legal repercussions that await businesses reneging on their privacy commitments. These precedents serve as a stark reminder of the legal imperatives that bind companies to their original user agreements.

The EU context is also worth considering. The GDPR's implications for AI and technology companies are significant, particularly its requirements for transparent data processing, informed consent, and the right of data subjects to object to data processing. For companies, this means navigating a labyrinth of legal obligations that mandate not only the protection of user data but also that any changes to privacy policies are communicated clearly.

The intersection of GDPR with the FTC's stance on privacy policy amendments highlights a consensus on the importance of data protection and the rights of consumers in the digital marketplace. This synergy between the US and EU approaches creates a formidable legal landscape that AI companies must navigate with caution and respect for user privacy.

The path forward for AI companies is clear: transparency is a key element of AI governance, upon which AI and data policies are built. It is arguably the most important element of the AI Act, and it is emerging as a key component in global legislation as jurisdictions develop their own AI regulations.

  • View profile for Deepak Bhardwaj

    Agentic AI Champion | 40K+ Readers | Simplifying GenAI, Agentic AI and MLOps Through Clear, Actionable Insights

    45,098 followers

If You Can't Trust Your Data, You Can't Trust Your Decisions.

Bad data is everywhere, and it's costly. Yet many businesses don't realise the damage until it's too late.

🔴 Flawed financial reports? Expect poor forecasts and wasted budgets.
🔴 Duplicate customer records? Say goodbye to personalisation and marketing ROI.
🔴 Incomplete supply chain data? Prepare for delays, inefficiencies, and lost revenue.

Poor data quality isn't just an IT issue; it's a business problem.

❯ The Six Dimensions of Data Quality

To drive real impact, businesses must ensure their data is:
✓ Accurate – Reflects reality to prevent bad decisions.
✓ Complete – No missing values that disrupt operations.
✓ Consistent – Uniform across systems for reliable insights.
✓ Timely – Up to date when you need it most.
✓ Valid – Follows required formats, reducing compliance risks.
✓ Unique – No duplicates or redundant records that waste resources.

❯ How to Turn Data Quality into a Competitive Advantage

Rather than fixing bad data after the fact, organisations must prevent it:
✓ Make Every Team Accountable – Data quality isn't just IT's job.
✓ Automate Governance – Proactive monitoring and correction reduce costly errors (see the sketch after this post).
✓ Prioritise Data Observability – Identify issues before they impact operations.
✓ Tie Data to Business Outcomes – Measure the impact on revenue, cost, and risk.
✓ Embed a Culture of Data Excellence – Treat quality as a mindset, not a project.

❯ How Do You Measure Success?

The true test of data quality lies in outcomes:
✓ Fewer errors → Higher operational efficiency
✓ Faster decision-making → Reduced delays and disruptions
✓ Lower costs → Savings from automated data quality checks
✓ Happier customers → Higher CSAT & NPS scores
✓ Stronger compliance → Lower regulatory risks

Quality data drives better decisions. Poor data destroys them.
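To make the "Automate Governance" point concrete, here is a minimal sketch of automated checks against four of the six dimensions above, assuming records arrive as a pandas DataFrame. The column names (`email`, `updated_at`), the validity rule, and the 30-day timeliness window are hypothetical illustrations, not taken from the post.

```python
# A minimal sketch of automated data-quality checks; column names,
# the email validity rule, and the 30-day window are assumptions.
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> dict:
    """Score a DataFrame against four of the six quality dimensions."""
    report = {}
    # Completeness: share of non-null cells across the whole frame.
    report["completeness"] = float(df.notna().mean().mean())
    # Uniqueness: share of rows that are not exact duplicates.
    report["uniqueness"] = float(1.0 - df.duplicated().mean())
    # Validity: share of emails matching a basic format (illustrative rule).
    if "email" in df.columns:
        valid = df["email"].str.match(r"[^@\s]+@[^@\s]+\.[^@\s]+", na=False)
        report["validity_email"] = float(valid.mean())
    # Timeliness: share of records updated within the last 30 days.
    if "updated_at" in df.columns:
        age = pd.Timestamp.now() - pd.to_datetime(df["updated_at"])
        report["timeliness_30d"] = float((age <= pd.Timedelta(days=30)).mean())
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "email": ["a@example.com", "bad-address", "a@example.com"],
        "updated_at": ["2025-01-01", "2025-06-01", "2025-01-01"],
    })
    print(data_quality_report(sample))
```

In practice, checks like these would run inside a pipeline or observability tool and alert when a score falls below an agreed threshold, which is what turns quality from a one-off project into ongoing governance.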

  • View profile for Shelley Zalis
Shelley Zalis is an Influencer
    327,641 followers

AI is only as good as the data we feed it. When it takes 30 prompts to get AI to picture a scientist as a woman, we have a problem. Bias in, bias out. If we want technology to reflect our world, we must train AI to be inclusive, amplifying the voices, faces, and ideas of women and people of color. It's about building a future that works for everyone.

➡️ AI misidentified darker-skinned women up to 34.7% of the time, compared to a 0.8% error rate for lighter-skinned men.
➡️ In 2020, only 14% of authors of AI-related research papers were women.
➡️ AI-driven hiring platforms can reflect and perpetuate anti-Black biases.
➡️ As of 2018, women comprised only 22% of AI professionals worldwide.

The future of AI is being built now, and if we don't course-correct, we risk reinforcing the same biases. It's time to ensure AI works for everyone. 👉 Jay Flores
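Figures like the 34.7% versus 0.8% gap come from disaggregated evaluation: computing error rates per demographic subgroup instead of one aggregate number. Here is a minimal sketch of that technique; the group labels and records are illustrative, not real benchmark data.

```python
# A minimal sketch of disaggregated error-rate evaluation;
# all records below are made-up illustrations.
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical labelled predictions from a classifier.
results = [
    ("darker_female", "male", "female"),   # misclassified
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]
print(error_rate_by_group(results))  # {'darker_female': 0.5, 'lighter_male': 0.0}
```

An aggregate error rate over these four records would be 25%, hiding the fact that one group sees no errors while the other sees 50%; that masking is exactly why per-group reporting matters.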

  • View profile for Peter Slattery, PhD
Peter Slattery, PhD is an Influencer

    MIT AI Risk Initiative | MIT FutureTech

    64,310 followers

    "This report developed by UNESCO and in collaboration with the Women for Ethical AI (W4EAI) platform, is based on and inspired by the gender chapter of UNESCO’s Recommendation on the Ethics of Artificial Intelligence. This concrete commitment, adopted by 194 Member States, is the first and only recommendation to incorporate provisions to advance gender equality within the AI ecosystem. The primary motivation for this study lies in the realization that, despite progress in technology and AI, women remain significantly underrepresented in its development and leadership, particularly in the field of AI. For instance, currently, women reportedly make up only 29% of researchers in the field of science and development (R&D),1 while this drops to 12% in specific AI research positions.2 Additionally, only 16% of the faculty in universities conducting AI research are women, reflecting a significant lack of diversity in academic and research spaces.3 Moreover, only 30% of professionals in the AI sector are women,4 and the gender gap increases further in leadership roles, with only 18% of in C-Suite positions at AI startups being held by women.5 Another crucial finding of the study is the lack of inclusion of gender perspectives in regulatory frameworks and AI-related policies. Of the 138 countries assessed by the Global Index for Responsible AI, only 24 have frameworks that mention gender aspects, and of these, only 18 make any significant reference to gender issues in relation to AI. Even in these cases, mentions of gender equality are often superficial and do not include concrete plans or resources to address existing inequalities. The study also reveals a concerning lack of genderdisaggregated data in the fields of technology and AI, which hinders accurate measurement of progress and persistent inequalities. It highlights that in many countries, statistics on female participation are based on general STEM or ICT data, which may mask broader disparities in specific fields like AI. For example, there is a reported 44% gender gap in software development roles,6 in contrast to a 15% gap in general ICT professions.7 Furthermore, the report identifies significant risks for women due to bias in, and misuse of, AI systems. Recruitment algorithms, for instance, have shown a tendency to favor male candidates. Additionally, voice and facial recognition systems perform poorly when dealing with female voices and faces, increasing the risk of exclusion and discrimination in accessing services and technologies. Women are also disproportionately likely to be the victims of AI-enabled online harassment. The document also highlights the intersectionality of these issues, pointing out that women with additional marginalized identities (such as race, sexual orientation, socioeconomic status, or disability) face even greater barriers to accessing and participating in the AI field."

  • View profile for Vin Vashishta
Vin Vashishta is an Influencer

    AI Strategist | Monetizing Data & AI For The Global 2K Since 2012 | 3X Founder | Best-Selling Author

    204,366 followers

Data privacy and ethics must be part of data strategies that set up for AI. Alignment and transparency are the most effective solutions, and both must be part of product design from day 1.

Myths: customers won't share data if we're transparent about how we gather it, and aligning with customer intent means less revenue.

Instacart customers search for milk and see an ad for milk. Ads are more effective when they are closer to a customer's intent to buy. Instacart charges more, so the app isn't flooded with ads.

SAP added a data-gathering opt-in clause to its contracts. Over 25,000 customers opted in. The anonymized data trained models that improved the platform's features. Customers benefit, and SAP attracts new customers with AI-supported features.

I've seen the benefits first-hand working on data and AI products. I use a recruiting app project as an example in my courses. We gathered data about the resumes recruiters selected for phone interviews and those they rejected. Rerunning the matching after 5 select/reject examples made immediate improvements to the candidate ranking results (a simplified version is sketched below). They asked for more transparency into the terms used for matching, and we showed them everything. We introduced the ability to reject terms or add their own, and the second-pass matches improved dramatically. We got training data to make the models better out of the box, and they were able to find high-quality candidates faster.

Alignment and transparency are core tenets of data strategy and the foundations of an ethical AI strategy.

#DataStrategy #AIStrategy #DataScience #Ethics #DataEngineering
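Here is a minimal sketch of the feedback-driven re-ranking loop the recruiting example describes: a few select/reject decisions shift term weights, and candidates are re-scored. The candidate names, terms, and scoring rule are hypothetical simplifications, not the actual product's matching model.

```python
# A minimal sketch of select/reject feedback re-ranking;
# candidates, terms, and the scoring rule are assumptions.
from collections import Counter

def rerank(candidates, selected, rejected):
    """candidates: {name: set of resume terms}. Returns names, best first."""
    weights = Counter()
    for name in selected:            # terms from chosen resumes get a boost
        weights.update(candidates[name])
    for name in rejected:            # terms from rejected resumes are penalised
        weights.subtract(candidates[name])

    def score(name):
        return sum(weights[t] for t in candidates[name])

    return sorted(candidates, key=score, reverse=True)

candidates = {
    "A": {"python", "mlops", "phd"},
    "B": {"java", "sales"},
    "C": {"python", "sales"},
}
print(rerank(candidates, selected=["A"], rejected=["B"]))  # A first, B last
```

Exposing the `weights` table to users would be the transparency step the post describes: recruiters can see, reject, or add the terms driving the ranking, which in turn yields better training data.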

  • View profile for Adam Brown, MD MBA
Adam Brown, MD MBA is an Influencer

Healthcare Industry Expert and Strategist | Founder @ABIG Health | Physician | Business School Professor | Healthcare Start-up Advisor

    47,427 followers

Is $10 adequate compensation for breaching and violating a patient's privacy?

This week, an unsettling development unfolded as BetterHelp, a widely used teletherapy platform now owned by Teladoc Health, settled with the FTC for $7.8 million over serious breaches of user privacy. According to the Federal Trade Commission, BetterHelp sold highly sensitive user data, including IP and email addresses and even answers to mental health questions, to social media giants like Facebook and Snap Inc.

The repercussions of such actions are profound. Clients of BetterHelp received notifications about the settlement, only to learn that the financial compensation offered is roughly $10. This token amount seems like a slap in the face, trivializing the potential damage to those affected.

Here's the problem: the bond between therapist/clinician and patient is sacrosanct, grounded in the assurance of confidentiality. When this trust is compromised, especially in such a blatant manner, it not only damages individual therapist-client relationships but could also deter people from seeking essential mental health services online.

The potential for harm here is incalculable. It undermines individual trust in telehealth services and casts a long shadow over the promise and potential of leveraging technology in healthcare and mental healthcare. While it's unclear whether these actions constitute a HIPAA violation, since it's uncertain whether the shared information was directly linked to identifiable patient health records, the breach of confidentiality remains a critical issue. More concerning is that the FTC reported that BetterHelp misrepresented that it was HIPAA compliant.

As we continue to embrace telehealth and innovations in healthcare, it is imperative that we prioritize strong, enforceable protections for patient data. Technology can greatly enhance healthcare delivery, but it must not do so at the cost of patient safety and privacy.

#telehealth #digitalhealth #privacy #FTC #BetterHelp #healthcare #HIPAA Genevieve Friedman UNC Kenan-Flagler Business School MBA@UNC ABIG Health

  • View profile for David Lancefield
David Lancefield is an Influencer

Strategy advisor & Exec Coach | Helping CEOs/CXOs perform at their best (transitions, first 100 days, decisions) | Founder, Strategy Shift | HBR Contributor | LinkedIn Top Voice 24/25 | LBS Guest Lecturer | Podcast Host

    23,473 followers

Your data knows more about you than you think, and it's not just about privacy.

We all know our phones track us. But few of us realise what that data really says, and what companies can do with it. In this episode of Lancefield on the Line, I speak with Professor Sandra Matz, psychologist, data scientist and author of Mindmasters, about the surprising power and peril of our digital footprints. This is one of the most stimulating and disturbing conversations I've had on the show.

Here are the top five takeaways:
1. Digital footprints go beyond what you post; they include everything from your GPS data to how often your phone runs out of battery.
2. Machine learning can infer your personality, values, and mental health from these "behavioural residues", often more accurately than people close to you.
3. There are significant benefits, such as early detection of emotional distress or AI-powered mental health support, when used ethically.
4. Federated learning is a game-changer, allowing AI to help you without harvesting your data (a minimal sketch follows this post). Trust becomes built-in, not blind.
5. The Evil Steve Test is essential for leaders; if your data practices were used by someone with bad intentions, would they still feel ethical?

This conversation will make you think differently about your phone, your organisation, and yourself. Tune in and explore how to lead and live with greater ethical awareness in the age of data.
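For takeaway 4, here is a minimal sketch of federated averaging, the basic mechanism behind federated learning: each simulated client trains a simple linear model on its own data, and only model weights, never the raw data, reach the server. The model, learning rate, and data are purely illustrative assumptions.

```python
# A minimal federated-averaging sketch with a linear model;
# all data, rates, and round counts are illustrative assumptions.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's on-device training; raw (X, y) stays local."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(weights, client_data):
    """Server averages the clients' weight updates, never their data."""
    updates = [local_update(weights, X, y) for X, y in client_data]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                          # three simulated devices
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(10):                         # ten communication rounds
    w = federated_average(w, clients)
print(w)                                    # approaches [2.0, -1.0]
```

The design point is the communication pattern: the server sees only averaged weights each round, which is what makes trust "built-in" rather than dependent on a promise not to look at the raw data.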

  • View profile for Holly Joint
Holly Joint is an Influencer

    COO | Board Member | Advisor | Speaker | Coach | Executive Search | Women4Tech

    19,680 followers

I wrote a post a year ago about whether AI could help more women reach leadership positions. It was a popular post. While Gen AI models have made improvements over the last year in addressing bias, I have become more cynical. Why?

I want to believe AI could help shift the gender balance in leadership, but let's be honest: the data that trains these systems reflects our uneven past. The full humanity and history of women are not written into our culture, history is riddled with biases, and AI models only superficially wipe them away. Here are some of my concerns with the opportunities I talked about previously:

1) Unravelling old stereotypes, or reinforcing them? AI can create interactive tools aimed at challenging tired gender roles, potentially inspiring more women to pursue unconventional paths. But if the training data is skewed, showing more male success stories, these tools will miss the mark. We need a concerted effort to diversify the data sets and amplify women's stories.

2) Rooting out workplace bias, but who's checking the system? Algorithms can scan hiring and promotion patterns, highlighting subtle biases. Yet if the data itself is slanted, AI might end up codifying existing inequalities. We need humans, especially women, in decision-making positions to question the conclusions AI spits out and ensure those insights serve everyone.

3) Empowering personal development, or stating the obvious? It's great that AI can point out confidence gaps and match women with mentors. But there's a nagging feeling that if the underlying patterns are based on fewer women rising to the top, the recommendations will be limited. We must keep scrutinising how AI is trained to ensure women get genuinely helpful advice rather than being boxed into low-ambition career paths.

In the end, it's still people who need to champion policy changes, demand transparency, and hold leaders accountable. None of this means we should abandon AI; we just shouldn't treat it as a quick fix. It requires humans to address the problems, using AI as a tool to support them. To make it all work, we have to stay laser-focused on ethics, data privacy and bias-free design. There's little point in turning to AI if it just recreates the same barriers we're trying to break.

Talk is cheap, action is hard. But what do you think? Are we asking too much from AI, or do we risk failing women if we don't harness its potential?

Image: leaders in a boardroom by Midjourney

#womenleaders #leadership #bias #AI

Enjoyed this? ♻️ Share it and follow Holly Joint for insights on strategy, leadership, culture, and women in a tech-driven future. 🙌🏻

  • View profile for Michael Lin

    Founder & CEO of Wonders.ai | AI, AR & VR Expert | Predictive Tech Pioneer | Board Director at Cheer Digiart | Anime Enthusiast | Passionate Innovator

    16,346 followers

The recent $95 million settlement by Apple over allegations of Siri-enabled privacy breaches underscores a pivotal moment for tech professionals navigating the delicate balance between innovation and user trust. As voice assistants become integral to our daily lives, this case illuminates the risks of unintentional data collection and the potential fallout (financial, reputational, and ethical) when consumer privacy is perceived as compromised.

For engineers, developers, and business leaders, this serves as a critical reminder: robust privacy safeguards and transparent practices aren't optional; they're fundamental to maintaining user loyalty in an increasingly data-sensitive world.

This moment invites the tech community to reimagine AI solutions that are not only cutting-edge but also deeply rooted in trust and accountability. How can we, as innovators, ensure that technology enhances lives while respecting the privacy and trust of its users?

#TechNews #Innovation #Privacy #Apple
