By next year we will be producing as much data every 15 minutes as all of human civilisation did up to the year 2003. Data might be the new oil, but it's unrefined, and AI companies are the new oil refineries. Many companies are quietly changing their terms and privacy policies to allow them to use this data for machine learning, and the FTC weighed in on this in a blog post last week. Organisations reviewing their policies and documentation, particularly around AI and data protection, and more broadly their T&Cs and contracts, need to be deliberate about how AI is addressed.

In its recent blog on the subject, the FTC says: "It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers' data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy."

The temptation for companies to unilaterally amend their privacy policies to allow broader data utilisation is palpable, driven by the dual forces of business incentive and technological evolution. However, such surreptitious alterations, aimed at circumventing user backlash, tread dangerously close to legal and ethical boundaries. We have already seen major companies fall foul of consumer backlash when they attempted to change their terms along these lines.

Historically, the FTC has taken a firm stance against what it deems deceptive practices. Cases like Gateway Learning Corporation and a notable genetic testing company underscore the legal repercussions that await businesses reneging on their privacy commitments. These precedents serve as a stark reminder of the legal imperatives that bind companies to their original user agreements.

The EU context is also worth considering. The GDPR's implications for AI and technology companies are significant, particularly its requirements for transparent data processing, informed consent, and the right of data subjects to object to processing. For companies, this means navigating a labyrinth of legal obligations that mandate not only the protection of user data but also the clear communication of any changes to privacy policies. The intersection of the GDPR with the FTC's stance on privacy policy amendments points to a consensus on the importance of data protection and consumer rights in the digital marketplace. Together, the U.S. and EU approaches create a formidable legal landscape that AI companies must navigate with caution and respect for user privacy.

The path forward for AI companies is clear: transparency is a key element of AI governance, upon which AI and data policies are built. It is arguably the most important element of the AI Act, and it is emerging as a core requirement as jurisdictions around the world develop their own AI regulations.
Impact of privacy cases on tech trust
Summary
The impact of privacy cases on tech trust refers to how legal actions and scandals involving companies mishandling user data influence public confidence in technology and digital services. When firms are seen to violate their privacy commitments, especially by sharing or misusing sensitive information, it erodes trust, shapes user choices, and fuels demands for stricter data protection laws worldwide.
- Prioritize transparency: Clearly communicate to users what data is collected and how it will be used, and avoid hidden changes to privacy policies.
- Strengthen safeguards: Invest in robust privacy protections and give users easy tools to manage data sharing and consent, especially as new technologies like AI evolve.
- Respect user rights: Treat consumer data responsibly and be prepared to respond quickly to concerns, since trust is easily lost and hard to regain in the digital age.
Is $10 adequate compensation for violating a patient's privacy? This week, an unsettling development unfolded as BetterHelp, a widely used teletherapy platform now owned by Teladoc Health, settled with the FTC for $7.8 million over serious breaches of user privacy. According to the Federal Trade Commission, BetterHelp sold highly sensitive user data, including IP and email addresses and even answers to mental health questions, to social media giants like Facebook and Snap Inc.

The repercussions of such actions are profound. Clients of BetterHelp received notifications about the settlement, only to learn that the financial compensation offered is roughly $10. This token amount feels like a slap in the face, trivializing the potential damage to those affected.

Here's the problem: the bond between therapist/clinician and patient is sacrosanct, grounded in the assurance of confidentiality. When this trust is compromised, especially in such a blatant manner, it not only damages individual therapist-client relationships but could also deter people from seeking essential mental health services online.

The potential for harm here is incalculable. It undermines individual trust in telehealth services and casts a long shadow over the promise and potential of leveraging technology in healthcare and mental healthcare. While it's unclear whether these actions constitute a HIPAA violation, since it's uncertain whether the shared information was directly linked to identifiable patient health records, the breach of confidentiality remains a critical issue. More concerning is that the FTC reported that BetterHelp misrepresented itself as HIPAA compliant.

As we continue to embrace telehealth and innovations in healthcare, it is imperative that we prioritize strong, enforceable protections for patient data. Technology can greatly enhance healthcare delivery, but it must not do so at the cost of patient safety and privacy.

#telehealth #digitalhealth #privacy #FTC #BetterHelp #healthcare #HIPAA Genevieve Friedman UNC Kenan-Flagler Business School MBA@UNC ABIG Health
The recent $95 million settlement by Apple over allegations of Siri-enabled privacy breaches underscores a pivotal moment for tech professionals navigating the delicate balance between innovation and user trust. As voice assistants become integral to our daily lives, this case illuminates the risks of unintentional data collection and the potential fallout—financial, reputational, and ethical—when consumer privacy is perceived as compromised. For engineers, developers, and business leaders, this serves as a critical reminder: robust privacy safeguards and transparent practices aren’t optional—they’re fundamental to maintaining user loyalty in an increasingly data-sensitive world. This moment invites the tech community to reimagine AI solutions that are not only cutting-edge but also deeply rooted in trust and accountability. How can we, as innovators, ensure that technology enhances lives while respecting the privacy and trust of its users? #TechNews #Innovation #Privacy #Apple
In an era where privacy is the ultimate luxury, Apple—a company renowned for its strong stance on user privacy—has found itself at the center of a massive controversy. The tech giant has agreed to pay $95 million (₹814 crores) in a lawsuit that accused Siri, its voice assistant, of recording private conversations without user consent and sharing them with third parties.

The Allegations
The case stemmed from claims that Siri was being inadvertently activated by users, leading to the recording of highly personal conversations. Even more troubling, these recordings were allegedly sent to third-party contractors for evaluation without user knowledge. This scandal was first exposed in 2019 by The Guardian, which reported that Apple's contractors listened to sensitive discussions, including:
- Medical consultations,
- Private business meetings, and
- Intimate personal exchanges.
While Apple denied any wrongdoing, this case highlights a glaring gap between privacy promises and actual practices.

The Settlement
Here's what it entails:
- Payout to Users: Thousands of affected users will receive compensation of $24 (₹1,700) per device.
- Legal Fees: A significant portion of the settlement—up to 30%—will go to the attorneys involved.
- Apple's Stance: The company maintains it did not violate user trust, but settled to avoid prolonged litigation.

The Bigger Picture
This incident is not just about Apple. It's a wake-up call for every company operating in the digital age:
1. Transparency is Non-Negotiable: Users have the right to know how their data is collected, stored, and used.
2. Trust is Fragile: Even giants like Apple can face reputational damage if user privacy is compromised.
3. Accountability Must Follow Innovation: Companies can no longer prioritize profits over ethics.

For Consumers
- Be Informed: Read privacy policies, however tedious they may seem.
- Be Proactive: Use device settings to limit data sharing and disable features like voice assistants when not in use.
- Advocate for Stricter Regulations: Governments must enforce stringent data protection laws to safeguard user rights.

Apple's Future Steps
Since the controversy, Apple has taken steps to rebuild user trust, including:
- Disabling human grading of Siri recordings,
- Allowing users to opt out of sharing their data, and
- Strengthening their privacy policies.
However, this lawsuit serves as a stark reminder: even the most trusted brands must remain under constant scrutiny.

What's Next?
As users, we need to push for digital ethics and ensure companies treat our data with the respect it deserves. Should stricter penalties be imposed for such violations? Are current privacy laws sufficient in protecting us? Let's discuss!

#DataPrivacy #AppleLawsuit #TechnologyEthics #DigitalSecurity #Siri #ConsumerRights #TransparencyMatters #EthicalTech
In the Age of Privacy, Even Giants Stumble: Apple's $95M Siri Lawsuit Settlement

Privacy is often called the ultimate luxury of our times, and Apple—a brand synonymous with protecting user data—has found itself entangled in controversy. The tech giant recently agreed to a $95 million settlement (₹814 crores) following allegations that Siri, its voice assistant, recorded private conversations without consent and shared them with third parties.

🚨 The Allegations
The lawsuit accused Siri of being inadvertently triggered, leading to sensitive conversations being recorded and evaluated by third-party contractors without user knowledge. These recordings reportedly included:
🩺 Medical consultations
💼 Confidential business discussions
❤️ Intimate personal exchanges
This issue came to light in 2019 through investigative reporting by The Guardian, raising serious concerns about the gap between Apple's privacy promises and its actual practices.

💸 The Settlement
Here's what the agreement entails:
- Compensation for Users: Affected users will receive $24 (₹1,700) per device.
- Legal Fees: Up to 30% of the settlement will go to attorneys.
- Apple's Response: While denying any wrongdoing, Apple stated that settling was a strategic decision to avoid prolonged litigation.

🔍 Lessons for the Industry
This isn't just about Apple; it's a broader lesson for all companies navigating the digital landscape:
- Transparency is Non-Negotiable: Users must know how their data is collected and used.
- Trust is Fragile: Even the most trusted brands can face backlash when privacy is compromised.
- Ethics Before Profit: Accountability must accompany innovation—companies can't prioritize growth over user rights.

🔑 For Consumers
What can we do to protect ourselves?
- Stay Informed: Read privacy policies, even if they seem tedious.
- Take Control: Review device settings to manage data-sharing and disable features like voice assistants when not needed.
- Demand Action: Advocate for stronger data protection laws and hold companies accountable.

🛡️ Apple's Next Steps
To address these concerns, Apple has implemented changes, including:
- Ending human grading of Siri recordings.
- Introducing opt-out options for data sharing.
- Strengthening its privacy policies.

🚀 The Road Ahead
This lawsuit serves as a critical reminder: even trusted brands must remain under scrutiny. But it also raises important questions: Should penalties for privacy violations be stricter? Are current data protection laws sufficient? Let's continue this important conversation and push for a future where our data is treated with the respect it deserves.

#DataPrivacy #Apple #TechnologyEthics #DigitalSecurity #Siri #ConsumerRights #TransparencyMatters #EthicalTech
⚠️ The 2025 Edelman Trust Barometer finds a global "crisis of grievance" is eroding trust across the board – including a lack of trust in institutions and AI, a surge in discrimination fears, and a lack of optimism for the next generation.

🤖 This is a prescient report as more governments and businesses invest heavily in AI. A failure to address these grievances is likely to result in even lower levels of trust, meaning AI efforts may fail or, at least, fail to return the expected ROI. Not to mention the wider implications for organisations' reputation and customer loyalty. It's a sobering, must-read report.

Some of the key points that stood out to me were:
🏛️ Grievance imposes a trust penalty, with all institutions (business, government, NGOs and the media) facing these challenges. While New Zealanders were not directly surveyed, the themes identified are globally consistent.
⚖️ Discrimination fears have surged to an all-time high. These fears are only likely to get worse, including ongoing concerns about the perpetuation of societal bias by AI systems.
🤖 Those who express greater levels of grievance are more suspicious of AI and have less trust in business leaders like CEOs. And business is not seen as going far enough to address issues like retraining, misinformation and discrimination – all highly relevant in an AI context.

Looking at this through a privacy and AI governance lens, organisations need to:
✅ Implement Responsible AI frameworks to reduce the risk of privacy violations, discrimination, IP misuse and other illegal or unethical outcomes, ultimately fostering trust among stakeholders. 👉 See https://lnkd.in/gJu47URq
✅ Invest in Privacy and Responsible AI-by-Design to embed good privacy and AI governance practices from the start and demonstrate a commitment to safeguarding people and their rights. 👉 See https://lnkd.in/gNAenAy6
✅ Be transparent about data and AI use. 👉 See https://lnkd.in/ge8h3nRG
✅ Assess your data and AI use with Privacy Impact Assessments and/or AI Impact Assessments to identify and mitigate risks proactively. 👉 See https://lnkd.in/gHWqwSkW
✅ Educate stakeholders: provide training for staff on how to ensure responsible AI use. 👉 See our e-learning modules (https://lnkd.in/gC-4dZPs) and training courses (https://lnkd.in/g6U3CBCx).

By addressing these areas, organisations can demonstrate leadership, reduce risk and rebuild trust in an era of rapid advancement in data consumption and AI.

#trust #responsibleai #aigovernance #privacy Simply Privacy Emma Pond Daimhin Warner
Too many companies today are treating data privacy as a legal strategy, rather than a core responsibility to their users. This mentality puts companies on the defensive, focusing more on avoiding lawsuits than on building trust through real privacy protections.

The Flo Health case highlights this problem. A class-action lawsuit in Canada alleges the app shared sensitive health data with third parties like Facebook, without users' proper consent. But instead of viewing this as a call to improve their privacy practices, many companies are reacting by introducing waivers that shield them from class-action lawsuits, as if legal protection should be their main priority.

This is entirely the wrong lesson to learn. Instead of hiding behind waivers, femtech companies should invest in data systems that actually protect users. Your goal should be to prevent breaches and misuse before they happen, not scramble to avoid legal consequences after the fact.

Trust comes from protecting the users' data from the start—waivers won't repair that trust once it's broken. Real security means putting user protection first, not just preparing for potential backlash. When you prioritize privacy, legal protection follows naturally—because a company built on trust doesn't need to rely on legal loopholes.
Corporate Governance for Technology: A Journey To The Boardroom, a class by Ntomi Mark & Ankunda .P. Kaana.

These gentlemen dug deep into modern governance challenges, using the Facebook-Cambridge Analytica scandal as a critical case study. This stark example underscored how data protection failures can erode trust, trigger regulatory wrath, and destabilize organizations.

Why boards cannot afford to ignore this:
1️⃣ Data privacy is no longer "just compliance"—it's a board-level strategic priority tied to reputation and risk.
2️⃣ Dynamic frameworks win: Static policies won't cut it. Companies must regularly audit and adapt privacy strategies to evolving threats.
3️⃣ Trust = competitive advantage: Organizations with robust privacy frameworks outperform peers in customer loyalty and regulatory resilience.

One jarring reflection: how often do we click "Accept All Cookies" or ignore the lengthy Terms and Conditions when signing up for apps and services?

Grateful to Ntomi Mark and Ankunda .P. Kaana for spotlighting why vigilance isn't optional—it's foundational to ethical leadership in the digital age.

#DataPrivacy #CorporateGovernance #JTBcohort2025
Google's cookies announcement isn't the week's big news; Oracle's $115 million privacy settlement is. 👇🏼

This week's most important news headline is: "Oracle's $115 million privacy settlement could change industry data collection methods." Every marketer and media leader should understand the allegations in the complaint and execute a review of their data strategy, policies, processes, and protocols, especially as they pertain to third-party data. While we've been talking and fretting about cookie deprecation for four years, we've missed the plot on data permission and usage. It's time to get our priorities straight.

Article in the comments section, and industry reaction from legal and data experts below.

Jason Barnes, partner at the Simmons Hanly Conroy law firm: "This case is groundbreaking. The allegations in the complaint were that Oracle was building detailed dossiers about consumers with whom it had no first-party relationship. Rather than face a jury, Oracle agreed to a significant monetary settlement and also announced it was getting out of the business," Barnes said. "The big takeaway is that surveillance tech companies that lack a first-party relationship with consumers have a significant problem: no American has actually consented to having their personal information surveilled everywhere they go by a company they've never heard of, packaged into a commoditized dossier, and then monetized and sold without their knowledge."

Debbie Reynolds, Founder, Chief Executive Officer, and Chief Data Privacy Officer at Debbie Reynolds Consulting, LLC: "Oracle's privacy case settlement is a significant precedent and highlights that privacy risks are now recognized as business risks, with reduced profits, increased regulatory pressure, and higher consumer expectations impacting organizations' bottom lines," Reynolds said. "One of the most important features of this settlement is Oracle's agreement to stop collecting user-generated information from external URLs and online forms, which is a significant concession in how they do business. Other businesses should take note."

#marketing #data #media Ketch super{set}
Today's announcement that Meta and its executives, including Mark Zuckerberg, have reached a confidential settlement to end the $8 billion shareholder suit marks a significant moment in corporate accountability. The resolution provides immediate relief and avoids a high-stakes public trial, but the undisclosed terms raise questions about how much accountability was actually achieved.

The case, stemming from alleged failures to adhere to the 2012 FTC consent decree and highlighted by incidents like Cambridge Analytica, prompts a broader discussion on governance and data ethics in Big Tech. As scrutiny of these issues escalates, the impact of this settlement extends to how boards navigate reputational and regulatory risks in the future. Investors and users will watch not only the financial implications but also whether tangible progress is made in safeguarding personal data. This development underscores the evolving landscape of corporate responsibility and the growing emphasis on transparency and ethical practices in the digital age.

Read more about this pivotal moment in corporate governance: https://lnkd.in/gzxmZEYh

#Meta #CorporateGovernance #DataPrivacy #CambridgeAnalytica #TechEthics #InvestorRights #BoardAccountability #BigTech #DigitalTrust #FTC #PrivacyMatters #ReputationRisk #AIandEthics #LeadershipInTech