The Changing Landscape of Privacy Practices

Explore top LinkedIn content from expert professionals.

Summary

The changing landscape of privacy practices reflects the evolving challenges and solutions in data protection, particularly in the age of artificial intelligence. With the increasing reliance on AI and machine learning technologies, traditional privacy frameworks struggle to address complex issues like data sovereignty, consent, and algorithmic transparency.

  • Prioritize privacy by design: Develop systems with embedded privacy controls, addressing data management, AI transparency, and mitigation measures from the outset to build user trust and reduce risks over time.
  • Adapt to new regulations: Stay informed about global and regional data protection laws, like GDPR, and ensure compliance by updating internal policies and engaging with knowledgeable legal and technical experts.
  • Evaluate AI data practices: Regularly assess how personal data is collected, transformed, stored, and managed during the AI lifecycle, and use privacy-enhancing technologies to minimize risks and ensure data security.
Summarized by AI based on LinkedIn member posts
  • View profile for Katharina Koerner

    AI Governance & Security | Trace3: All Possibilities Live in Technology | Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,353 followers

    This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era," addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction and regulatory implications between predictive and generative AI.

    The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both individual and societal levels. Existing laws are inadequate for the emerging challenges posed by AI systems because they neither tackle the shortcomings of the FIPs framework nor concentrate adequately on the comprehensive data governance measures necessary for regulating data used in AI development. According to the paper, FIPs are outdated and ill-suited to modern data and AI complexities because:

    - They do not address the power imbalance between data collectors and individuals.
    - They fail to enforce data minimization and purpose limitation effectively.
    - They place too much responsibility on individuals for privacy management.
    - They allow data collection by default, putting the onus on individuals to opt out.
    - They focus on procedural rather than substantive protections.
    - They struggle with the concepts of consent and legitimate interest, complicating privacy management.

    The paper emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. It suggests three key strategies to mitigate the privacy harms of AI:

    1.) Denormalize Data Collection by Default: Shift from opt-out to opt-in data collection models to facilitate true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms.
    2.) Focus on the AI Data Supply Chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire lifecycle of data. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain.
    3.) Flip the Script on Personal Data Management: Encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by facilitating easier management and control of their personal data in the context of AI.

    By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V
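
    As an editorial aside, here is a minimal sketch of what the paper's first strategy, opt-in collection or "privacy by default", could look like in application code. This is not from the white paper; the ConsentRegistry class, purpose names, and user IDs are hypothetical illustrations.

    ```python
    # Hypothetical sketch of opt-in ("privacy by default") data collection.
    # Nothing is collected unless the user has affirmatively granted consent
    # for a specific, named purpose; the default answer is always "no".
    from dataclasses import dataclass, field


    @dataclass
    class ConsentRegistry:
        # Maps user_id -> set of purposes the user explicitly opted in to.
        _grants: dict[str, set[str]] = field(default_factory=dict)

        def grant(self, user_id: str, purpose: str) -> None:
            self._grants.setdefault(user_id, set()).add(purpose)

        def allows(self, user_id: str, purpose: str) -> bool:
            # Opt-in model: absence of a record means "not allowed".
            return purpose in self._grants.get(user_id, set())


    def collect_event(registry: ConsentRegistry, user_id: str,
                      purpose: str, payload: dict) -> dict | None:
        """Collect data only if the user opted in for this exact purpose."""
        if not registry.allows(user_id, purpose):
            return None  # data minimization: drop instead of storing by default
        return {"user": user_id, "purpose": purpose, "data": payload}


    registry = ConsentRegistry()
    registry.grant("user-42", "model_evaluation")
    print(collect_event(registry, "user-42", "ad_targeting", {"page": "/pricing"}))   # None
    print(collect_event(registry, "user-42", "model_evaluation", {"page": "/docs"}))  # collected
    ```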

  • View profile for Janel Thamkul

    Deputy General Counsel @ Anthropic | AI + Emerging Tech Law | ex-Google

    7,137 followers

    The rapid advancement of AI technologies, particularly LLMs, has highlighted important questions about the application of privacy laws like the GDPR. As someone who has been grappling with this issue for years, I am *thrilled* to see the Hamburg DPC's discussion paper approach privacy risks and AI with a deep understanding of the technology. A few absolutely refreshing takeaways:

    ➡ LLMs process tokens and the vectorial relationships between tokens (embeddings), fundamentally differing from conventional data storage and retrieval. The Hamburg DPC finds that LLMs don't "process" or "store" personal data within the meaning of the GDPR.
    ➡ Unlike traditional identifiers, tokens and their embeddings in LLMs lack the direct, targeted association to individuals that characterizes personal data in CJEU jurisprudence.
    ➡ Memorization attacks that extract training data from an LLM don't necessarily demonstrate that personal data is stored in the LLM. These attacks may be practically disproportionate and potentially legally prohibited, making personal identification not "possible" under the legislation.
    ➡ Even if personal data was unlawfully processed in developing the LLM, that doesn't render the use of the resulting LLM illegal (providing downstream deployers some comfort when leveraging third-party models).

    This is a nuanced and technology-informed perspective on the complex intersection of AI and privacy. As we continue to navigate this rapidly evolving landscape, I hope we see more regulators and courts approach regulation and legal compliance with a deep understanding of how the technology actually works. #AI #Privacy #GDPR #LLM
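
    To illustrate the tokens-and-embeddings point above (an editorial sketch, not part of the Hamburg DPC paper), the snippet below assumes the Hugging Face transformers and torch packages and the public gpt2 checkpoint are available. It shows that a sentence containing a name becomes integer token IDs and dense vectors rather than a record retrievable by that name.

    ```python
    # Sketch: how an LLM "sees" text - token IDs and embedding vectors,
    # not a record keyed to a person. Assumes transformers + torch installed.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    text = "Jane Doe lives in Hamburg."
    ids = tokenizer.encode(text)                   # a list of integers
    tokens = tokenizer.convert_ids_to_tokens(ids)  # sub-word pieces, not fields in a table

    with torch.no_grad():
        vectors = model.get_input_embeddings()(torch.tensor(ids))  # one dense vector per token

    print(tokens)          # the name is split into sub-word fragments
    print(vectors.shape)   # (number_of_tokens, 768): geometry, not a lookup of "Jane Doe"
    ```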

  • View profile for Pradeep Sanyal

    Enterprise AI Leader | Former CIO & CTO | Chief AI Officer (Advisory) | Data & AI Strategy → Implementation | 0→1 Product Launch

    19,114 followers

    Privacy isn't a policy layer in AI. It's a design constraint.

    The new EDPB guidance on LLMs doesn't just outline risks. It gives builders, buyers, and decision-makers a usable blueprint for engineering privacy, not just documenting it.

    The key shift?
    → Yesterday: Protect inputs
    → Today: Audit the entire pipeline
    → Tomorrow: Design for privacy observability at runtime

    The real risk isn't malicious intent. It's silent propagation through opaque systems. In most LLM systems, sensitive data leaks not because someone intended harm but because no one mapped the flows, tested outputs, or scoped where memory could resurface prior inputs. This guidance helps close that gap. And here's how to apply it:

    For Developers (a small flow-mapping sketch follows this post):
    • Map how personal data enters, transforms, and persists
    • Identify points of memorization, retention, or leakage
    • Use the framework to embed mitigation into each phase: pretraining, fine-tuning, inference, RAG, feedback

    For Users & Deployers:
    • Don't treat LLMs as black boxes. Ask if data is stored, recalled, or used to retrain
    • Evaluate vendor claims with structured questions from the report
    • Build internal governance that tracks model behaviors over time

    For Decision-Makers & Risk Owners:
    • Use this to complement your DPIAs with LLM-specific threat modeling
    • Shift privacy thinking from legal compliance to architectural accountability
    • Set organizational standards for "commercial-safe" LLM usage

    This isn't about slowing innovation. It's about future-proofing it. Because the next phase of AI scale won't just be powered by better models. It will be constrained and enabled by how seriously we engineer for trust.

    Thanks European Data Protection Board, Isabel Barberá. H/T Peter Slattery, PhD
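
    One way a developer might start on the first bullet above, mapping how personal data enters, transforms, and persists across pipeline stages, is with a simple flow log. This is an editorial sketch rather than the EDPB's framework; the stage names and the regex-based detect_pii helper are illustrative placeholders.

    ```python
    # Minimal sketch: record where personal data appears across LLM pipeline stages.
    # The stages and the naive regex-based PII detector are illustrative only.
    import re
    from dataclasses import dataclass, field

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def detect_pii(text: str) -> list[str]:
        """Very naive placeholder; a real system would use a proper PII classifier."""
        return EMAIL.findall(text)

    @dataclass
    class FlowMap:
        records: list[dict] = field(default_factory=list)

        def observe(self, stage: str, text: str) -> str:
            hits = detect_pii(text)
            self.records.append({"stage": stage, "pii_found": bool(hits), "examples": hits})
            return text  # pass data through unchanged; we only record where PII shows up

    flow = FlowMap()
    prompt = flow.observe("inference_input", "Summarize the ticket from ada@example.com")
    retrieved = flow.observe("rag_retrieval", "Ticket 1042: printer offline since Monday")
    output = flow.observe("inference_output", "The printer reported by the user is offline.")

    for r in flow.records:
        print(r)   # shows that PII entered at the prompt but did not persist downstream
    ```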

  • View profile for Colin S. Levy

    General Counsel @ Malbek - CLM for Enterprise | Adjunct Professor of Law | Author of The Legal Tech Ecosystem | Legal Tech Educator | Fastcase 50 (2022)

    45,406 followers

    As a veteran SaaS lawyer, I've watched Data Processing Agreements (DPAs) evolve from afterthoughts to deal-breakers. Let's dive into why they're now non-negotiable and what you need to know:

    A) DPA Essentials Often Overlooked:
    - Subprocessor Management: DPAs should detail how and when clients are notified of new subprocessors. This isn't just courteous; it's often legally required.
    - Cross-Border Transfers: Post-Schrems II, mechanisms for lawful data transfers are crucial. Standard Contractual Clauses aren't a silver bullet anymore.
    - Data Minimization: Concrete steps to ensure only necessary data is processed. Vague promises don't cut it.
    - Audit Rights: Specific procedures for controller-initiated audits. Without these, you're flying blind on compliance.
    - Breach Notification: Clear timelines and processes for reporting data breaches. Every minute counts in a crisis.

    B) Why Cookie-Cutter DPAs Fall Short:
    - Industry-Specific Risks: Healthcare DPAs need HIPAA provisions; fintech needs PCI-DSS compliance clauses. One size does not fit all.
    - AI/ML Considerations: Special clauses for automated decision-making and profiling are essential as AI becomes ubiquitous.
    - IoT Challenges: Addressing data collection from connected devices. The "Internet of Things" is a privacy minefield.
    - Data Portability: Clear processes for returning data in usable formats post-termination. Don't let your data become a hostage.
    - Privacy by Design: Embedding privacy considerations into every aspect of data processing. It's not just good practice; it's the law.

    In 2024, with GDPR fines hitting €1.4 billion, generic DPAs are a liability, not a safeguard. As AI and IoT reshape data landscapes, DPAs must evolve beyond checkbox exercises to become strategic tools. Remember, in the fast-paced tech industry, knowledge of these agreements isn't just useful; it's essential. They're not just legal documents; they're the foundation for innovation and collaboration in our digital age.

    Pro tip: Review your DPAs quarterly. The data world moves fast, and your agreements should keep pace. Pay special attention to changes in data protection laws, new technologies you're adopting, and shifts in your data processing activities. Clear, well-structured DPAs prevent disputes and protect all parties' interests.

    What's the trickiest DPA clause you've negotiated? Share your war stories below. #legaltech #innovation #law #business #learning

  • View profile for James Dempsey

    Managing Director, IAPP Cybersecurity Law Center, and Senior Policy Advisor, Stanford Program on Geopolitics, Technology and Governance

    6,003 followers

    Privacy isn't just about privacy anymore (and maybe never was). That's my takeaway from a fascinating new report from IAPP - International Association of Privacy Professionals. As regulations related to privacy, AI governance, cybersecurity, and other areas of digital responsibility rapidly expand and evolve around the globe, organizations are taking a more holistic approach to their values and strategies related to data. One indicator: over 80% of privacy teams now have responsibilities that extend beyond privacy. Nearly 70% of chief privacy officers surveyed by IAPP have acquired additional responsibility for AI governance, 69% are now responsible for data governance and data ethics, 37% for cybersecurity regulatory compliance, and 20% for platform liability. And, in my opinion, if privacy teams don't have official responsibility for other areas of data governance (AI, data ethics, cybersecurity), they should surely be coordinating with those other teams. https://lnkd.in/gM8WGx9T

  • View profile for Richard Lawne

    Privacy & AI Lawyer

    2,654 followers

    I'm increasingly convinced that we need to treat "AI privacy" as a distinct field within privacy, separate from but closely related to "data privacy". Just as the digital age required the evolution of data protection laws, AI introduces new risks that challenge existing frameworks, forcing us to rethink how personal data is ingested and embedded into AI systems. Key issues include:

    🔹 Mass-scale ingestion – AI models are often trained on huge datasets scraped from online sources, including publicly available and proprietary information, without individuals' consent.
    🔹 Personal data embedding – Unlike traditional databases, AI models compress, encode, and entrench personal data within their training, blurring the lines between the data and the model.
    🔹 Data exfiltration & exposure – AI models can inadvertently retain and expose sensitive personal data through overfitting, prompt injection attacks, or adversarial exploits.
    🔹 Superinference – AI uncovers hidden patterns and makes powerful predictions about our preferences, behaviours, emotions, and opinions, often revealing insights that we ourselves may not even be aware of.
    🔹 AI impersonation – Deepfake and generative AI technologies enable identity fraud, social engineering attacks, and unauthorized use of biometric data.
    🔹 Autonomy & control – AI may be used to make or influence critical decisions in domains such as hiring, lending, and healthcare, raising fundamental concerns about autonomy and contestability.
    🔹 Bias & fairness – AI can amplify biases present in training data, leading to discriminatory outcomes in areas such as employment, financial services, and law enforcement.

    To date, privacy discussions have focused on data: how it's collected, used, and stored. But AI challenges this paradigm. Data is no longer static. It is abstracted, transformed, and embedded into models in ways that challenge conventional privacy protections. If "AI privacy" is about more than just the data, should privacy rights extend beyond inputs and outputs to the models themselves? If a model learns from us, should we have rights over it? #AI #AIPrivacy #Dataprivacy #Dataprotection #AIrights #Digitalrights
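
    To make the "data exfiltration & exposure" point concrete, here is a rough canary-style memorization test (an editorial sketch; the generate() function is a hypothetical stand-in for any LLM client, and the canary string is made up): plant a unique marker in fine-tuning data, then check later whether the model reproduces it from a prefix.

    ```python
    # Rough sketch of a canary-style memorization check for an LLM.
    # `generate` is a hypothetical stand-in for whatever completion API or
    # local model you use; the canary string itself is made up for illustration.

    CANARY = "employee-ssn-canary-7f3a91"          # unique marker planted in fine-tuning data
    PREFIX = "The HR record for Jane includes "    # text that preceded the canary in training data

    def generate(prompt: str) -> str:
        """Placeholder: call your model here (e.g. a local checkpoint or vendor API)."""
        raise NotImplementedError

    def memorization_detected(n_samples: int = 20) -> bool:
        """If the canary resurfaces in completions, training data is being regurgitated."""
        completions = (generate(PREFIX) for _ in range(n_samples))
        return any(CANARY in text for text in completions)

    # Usage idea: plant CANARY in the fine-tuning corpus, train, then run
    # memorization_detected() before shipping; a True result means the model can
    # leak verbatim training data and needs mitigation (dedup, DP training, filtering).
    ```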

  • View profile for Manish Sood

    Chief Executive Officer, Founder & Chairman at Reltio

    14,892 followers

    President Biden's recent Executive Order on AI leaves one key issue open that remains top of mind for most organizations today: data privacy. The order calls on Congress to pass "bipartisan data privacy legislation" to protect Americans' data.

    As we embrace the power of AI, we must also recognize the morphing challenges of data privacy in the context of data sovereignty. The rules are constantly changing, and organizations need flexibility to maintain compliance not just in their home countries but also in every country in which they operate. Governments worldwide, from the European Union with its GDPR to India with its Personal Data Protection Bill, are setting stringent regulations to protect their citizens' data. The essence? Data about a nation's citizens or businesses should only reside on systems within their legal and regulatory purview.

    We all know AI is a game-changer, but it is also a voracious consumer of data and a complicating factor for data sovereignty, especially generative AI, which consumes data indiscriminately, often stored and processed at the AI companies' discretion. This collision between AI's insatiable appetite for data, the temptation for organizations to use it, and global data sovereignty regulations presents a unique challenge for businesses. With the right approach, businesses can harness the power of AI while respecting data sovereignty. Here are a few ideas on how (a small residency-check sketch follows this post):

    Mindset: Make data sovereignty a company-wide priority. It's not just an IT or legal concern; it's a business imperative. Every team member should understand the risks associated with non-compliance.
    Inventory: Know your data. With large enterprises storing data in over 800 applications on average, it's crucial to maintain an inventory of your company's data and be aware of the vendors interacting with it.
    Governance: Stay updated with regional data laws and ensure compliance. Data sovereignty requires governance to be local as well.
    Vendor Compliance: Your external vendors should be in lockstep with your data policies.
    Leverage Data Unification Solutions: Use flexible, scalable tools to ensure data sovereignty compliance. Data unification and management tools powered by AI can detect data leakages, trace data lineage, and ensure data remains within stipulated borders.

    I've witnessed how this can be accomplished in many industries, including healthcare. Despite stringent privacy and sovereignty policies, many healthcare management systems demonstrate that robust data management, compliant with regulations, is achievable. The key is designing systems with data management policies from the outset.

    To all global organizations: embrace the future, but let's do it responsibly. Data privacy and sovereignty are not hurdles; they are responsibilities we must uphold for the trust of our customers and the integrity of our businesses. Planning for inevitable changes now will pay dividends in the future. #data
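
    As a small illustration of the residency idea above (an editorial sketch, not a Reltio feature; the region names and policy map are invented), an application can tag each record with a residency label and refuse to process it outside the regions that label allows.

    ```python
    # Illustrative sketch: enforce that a record is only processed in regions
    # allowed by its residency tag. Region names and the policy map are made up.

    ALLOWED_PROCESSING_REGIONS = {
        "eu": {"eu-west-1", "eu-central-1"},   # EU data stays on EU systems
        "in": {"ap-south-1"},                  # Indian data stays in-country
        "us": {"us-east-1", "us-west-2"},
    }

    class ResidencyViolation(Exception):
        pass

    def assert_residency(record_region: str, processing_region: str) -> None:
        allowed = ALLOWED_PROCESSING_REGIONS.get(record_region, set())
        if processing_region not in allowed:
            raise ResidencyViolation(
                f"{record_region!r} data may not be processed in {processing_region!r}"
            )

    def enrich_customer_record(record: dict, processing_region: str) -> dict:
        assert_residency(record["residency"], processing_region)
        # ... safe to run AI enrichment here ...
        return {**record, "enriched": True}

    record = {"id": "cust-9", "residency": "eu", "name": "Example GmbH"}
    print(enrich_customer_record(record, "eu-central-1"))   # ok
    # enrich_customer_record(record, "us-east-1")           # would raise ResidencyViolation
    ```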

  • View profile for Amrit Jassal

    CTO at Egnyte Inc

    2,466 followers

    Generative AI offers transformative potential, but how do we harness it without compromising crucial data privacy? It's not an afterthought; it's central to the strategy. Evaluating the right approach depends heavily on specific privacy goals and data sensitivity.

    One starting point, with strong vendor contracts, is using the LLM context window directly. For larger datasets, Retrieval-Augmented Generation (RAG) scales well. RAG retrieves relevant information at query time to augment the prompt, which helps keep private data out of the LLM's core training dataset. However, optimizing RAG across diverse content types and meeting user expectations for structured, precise answers can be challenging.

    At the other extreme lies self-hosting LLMs. This offers maximum control but introduces significant deployment and maintenance overhead, especially when aiming for the capabilities of large foundation models. For ultra-sensitive use cases, this might be the only viable path. Distilling larger models for specific tasks can mitigate some deployment complexity, but the core challenges of self-hosting remain.

    Look at Apple Intelligence as a prime example. Their strategy prioritizes user privacy through on-device processing, minimizing external data access. While not explicitly labeled RAG, the architecture, with its semantic index, orchestration, and LLM interaction, strongly resembles a sophisticated RAG system, proving privacy and capability can coexist.

    At Egnyte, we believe robust AI solutions must uphold data security. For us, data privacy and fine-grained, authorized access aren't just compliance hurdles; they are innovation drivers. Looking ahead to advanced agent-to-agent AI interactions, this becomes even more critical. Autonomous agents require a bedrock of trust, built on rigorous access controls and privacy-centric design, to interact securely and effectively. This foundation is essential for unlocking AI's future potential responsibly.
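
    A toy sketch of the RAG pattern described above (an editorial illustration, not Egnyte's implementation): documents stay in a local store, only the passages the current user is authorized to see are retrieved into the prompt, and nothing is sent for training. The embed and complete functions are hypothetical placeholders for a real embedding model and LLM.

    ```python
    # Toy RAG sketch: retrieve only authorized passages at query time and put them
    # in the prompt, so private documents never enter the model's training data.
    # `embed` and `complete` are placeholders for whatever model/provider you use.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder embedding function; swap in a real embedding model."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(64)

    def complete(prompt: str) -> str:
        """Placeholder LLM call; swap in a self-hosted or contracted vendor model."""
        return f"[answer based only on prompt context: {len(prompt)} chars]"

    DOCS = [
        {"text": "Q3 revenue forecast draft...", "allowed_groups": {"finance"}},
        {"text": "Public holiday schedule 2025", "allowed_groups": {"finance", "everyone"}},
    ]

    def retrieve(query: str, user_groups: set[str], k: int = 2) -> list[str]:
        visible = [d for d in DOCS if d["allowed_groups"] & user_groups]  # enforce access first
        q = embed(query)
        scored = sorted(visible, key=lambda d: -float(np.dot(embed(d["text"]), q)))
        return [d["text"] for d in scored[:k]]

    def answer(query: str, user_groups: set[str]) -> str:
        context = "\n".join(retrieve(query, user_groups))
        return complete(f"Context:\n{context}\n\nQuestion: {query}")

    print(answer("When are the holidays?", {"everyone"}))  # only the public doc is retrievable
    ```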

  • View profile for Jodi Daniels

    Practical Privacy Advisor / Fractional Privacy Officer / AI Governance / WSJ Best Selling Author / Keynote Speaker

    19,760 followers

    In AI tools, the fine print isn't optional. It's everything.

    Recently checked out a cool new AI tool that promised awesome graphics.
    First red flag? No mention of data use, privacy, or security on the site.
    Second red flag? The terms of service say it takes no responsibility; that sits with the LLMs it uses.
    Third red flag? The same terms say it can use the data for its own purposes.
    Fourth red flag? The same terms specifically state: do not upload confidential information.

    Even if my content would be outward facing, I don't want to knowingly share my information with a third party who then shares it with LLMs and uses it for themselves. And this was just a review of one AI tool. Managing AI privacy risks is critical for all companies, no matter the size. Here are 5 tips to help manage AI risk:

    1. Strengthen Your Data Governance. Create a cross-functional team to develop clear policies on AI use cases. Consider third-party data access and usage, how AI will be used within the business, and whether it involves sensitive data. Pro tip: Use frameworks like the NIST Privacy Framework to guide your efforts.
    2. Conduct Privacy Impact Assessments (PIAs) for AI. Review your existing PIA processes to determine if AI can be integrated into the assessment process. Assess AI-specific risks like bias, ethics, discrimination, and the data inferences often made by AI models.
    3. Train Your Team on AI Transparency. Develop ongoing training programs to increase awareness of AI and how it intersects with privacy and employee roles.
    4. Address Privacy Rights Challenges Posed by AI. Determine how you will uphold privacy rights once data is embedded in a model. Consider how you will handle requests for access, portability, rectification, erasure, and processing restrictions. Remember, privacy notices should include provisions about how AI is used.
    5. Manage Third-Party AI Vendors Carefully. Ask vendors where they get their AI model, what kind of data is used to train the AI, and how often they refresh their data. Determine how vendors handle bias, inaccuracies, or underrepresentation in the AI's outputs. Audit AI vendors and contracts regularly to identify new risks.

    AI's potential is immense, but so are the challenges it brings. Be proactive. Build trust. Stay ahead. Learn more in our carousel and blog link below 👇

  • We get asked a lot at Privatus and PrivacyCode.AI what the implications are as AI and other forms of machine learning get "smarter". I am a first-principles, pragmatic kinda gal, so I like to break things down into categories and workflows to tackle the complex stuff. The policy and legal complexities run in tandem and must be integrated in this discussion, but we will KISS today.

    IoT, Big Data, data lakes, distributed compute, AI/ML, and quantum compute share several common confounding attributes:
    1. Quantity of data
    2. Source/value/integrity of data-gathering fidelity
    3. Ownership/legal integrity and provenance throughout the lifecycle for datum and data sets
    4. Time
    5. Quality
    6. People
    7. Inherent or imminent systems controls and capabilities for adaptation

    As the quantity increases, the marketing, sales, and venture communities become excited. Shiny New Things!!!!! Since Bohr's Law only applies to the various states of agitation of electrons, I shall arrogantly (with tongue firmly in cheek) suggest Dennedy's Law: the greater the quantity of data, the greater the potential for data privacy and protection chaos, growing with distance from the originating (and/or described) person and from the original purpose for which that datum was collected or observed, with the greatest degradation over time. In other words, the more the ungoverned sparkle-magic "automagic" "convenience" "personalization" "monetization" happens exponentially behind your back, the greater the risk of getting things wrong and causing havoc and harm with the Shiny New Thing.

    In short, the garbage-in, garbage-out rules of compute are supremely important when we attempt to use predictive or even sorting analytics across data sets. GIGO stinks more over time and when passed from third party to third party. (Entire companies in the startup world are dedicated to data deodorant or sniffing exercises.) Privacy engineering should contemplate data in both its original state and in future recombinant states, and your program and tools should reflect those risks accordingly. (You really do need a platform for this, although your spreadsheets are cute.)

    Where we cannot control precisely with technology, by encrypting things, "anonymizing" things, getting someone's permission for things, or getting a contract or a treaty signed to move things, we must also provide an additional layer of what I call Ethics Engineering: contemplating and planning for unforeseen or inhumane practices so we can audit for evidence of their occurrence and take corrective or compensating action.

    This is a simple question with lifetimes of careers and study and tools spooling far into the future, but the GIGO economy is only gathering momentum. There is much to build, my friends!! Onward!! #AFF #PrivacyEngineering #PrivacyCode.AI
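
    To make the GIGO-over-time point a bit more tangible, here is a toy editorial sketch (entirely illustrative, not a PrivacyCode.AI feature) in which a record carries its provenance and a made-up trust score decays with every third-party handoff and every year since collection, echoing the relationship "Dennedy's Law" describes.

    ```python
    # Toy illustration of "GIGO stinks more over time and across third-party hops":
    # a record carries provenance, and an arbitrary trust score decays with each
    # handoff and each year since collection. The decay factors are made up.
    from dataclasses import dataclass, field

    HANDOFF_DECAY = 0.8   # arbitrary: each third-party transfer keeps 80% of trust
    YEARLY_DECAY = 0.9    # arbitrary: each year since collection keeps 90% of trust

    @dataclass
    class Datum:
        value: str
        original_purpose: str
        years_since_collection: int = 0
        handoffs: list[str] = field(default_factory=list)

        def transfer_to(self, third_party: str) -> "Datum":
            self.handoffs.append(third_party)
            return self

        def trust_score(self) -> float:
            return (HANDOFF_DECAY ** len(self.handoffs)) * (YEARLY_DECAY ** self.years_since_collection)

    d = Datum("prefers size M", original_purpose="order fulfilment", years_since_collection=3)
    d.transfer_to("ad-network-A").transfer_to("data-broker-B")
    print(round(d.trust_score(), 2))  # 0.47: far from the person and purpose, use with care
    ```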
