🚨 BREAKING: An extremely important lawsuit at the intersection of PRIVACY and AI was filed against Otter over its AI meeting assistant's lack of CONSENT from meeting participants. If you use meeting assistants, read this:

Otter, the AI company being sued, offers an AI-powered service that, like many in this business niche, can transcribe and record the content of private conversations between its users and meeting participants (who are often NOT users and do not know that they are being recorded). Various privacy laws in the U.S. and beyond require that, in such cases, consent from meeting participants is obtained.

The lawsuit specifically mentions:
- The Electronic Communications Privacy Act;
- The Computer Fraud and Abuse Act;
- The California Invasion of Privacy Act;
- California's Comprehensive Computer Data Access and Fraud Act;
- The California common law torts of intrusion upon seclusion and conversion;
- The California Unfair Competition Law.

As more and more people use AI agents, AI meeting assistants, and all sorts of AI-powered tools to "improve productivity," privacy aspects are often forgotten (in yet another manifestation of AI exceptionalism). In this case, according to the lawsuit, the company has explicitly stated that it trains its AI models on recordings and transcriptions made using its meeting assistant.

The main allegation is that Otter obtains consent only from its account holders, not from other meeting participants: it asks users to make sure other participants consent, shifting the privacy responsibility. As many of you know, this practice is common, and various AI companies shift the privacy responsibility to users, who often ignore (or don't know) what national and state laws actually require.

So if you use meeting assistants, you should know that it's UNETHICAL and in many places also ILLEGAL to record or transcribe meeting participants without obtaining their consent.
Additionally, it's important to keep in mind that AI companies might use this data (which often contains personal information) to train AI, and there could be leaks and other privacy risks involved.

👉 Link to the lawsuit below.
👉 Never miss my curations and analyses on AI's legal and ethical challenges: join my newsletter's 74,000+ subscribers.
👉 To learn more about the intersection of privacy and AI (and many other topics), join the 24th cohort of my AI Governance Training in October.
-
🤖 Best chance to have well-informed discussions on AI: the #AI Bible, accessible for free! 🗞️

The Cambridge Handbook on the Law, Ethics, and Policy of Artificial Intelligence, 2025
👓 contributions from experts
👓 theoretical insights and practical examples of AI applications

The Handbook examines:
🔹 the legal, ethical, and policy challenges of AI & algorithmic systems, esp. in #Europe
🔹 the societal impact of these technologies
🔹 the legal frameworks that regulate them

📚 18 chapters

🎓 I: AI, ETHICS AND PHILOSOPHY
1 AI: A Perspective from the Field
2 Philosophy of AI: A Structured Overview
3 Ethics of AI: Toward a "Design for Values" Approach
4 Fairness and Artificial Intelligence
5 Moral Responsibility and Autonomous Technologies: Does AI Face a Responsibility Gap?
6 AI, Power and Sustainability

⚖️ II: AI, LAW AND POLICY
7 AI Meets the GDPR: Navigating the Impact of Data Protection on AI Systems
8 Tort Liability and AI
9 AI and Competition Law
10 AI and Consumer Protection
11 AI and Intellectual Property Law
12 The European Union's AI Act

🤖 III: AI ACROSS SECTORS
13 AI and Education
14 AI and Media
15 AI and Healthcare Data
16 AI and Financial Services
17 AI and Labor Law
18 Legal, Ethical, and Social Issues of AI and Law Enforcement in Europe: The Case of Predictive Policing

👏🏼 Edited by Nathalie Smuha, a legal scholar at KU Leuven who specializes in AI's impact on human rights, democracy, and the rule of law.
🔗 Cambridge University Press & Assessment.
-
The expansion of robots and automation is poised to significantly transform the job market and has complex implications for inequality. What do you think?

Impact on Jobs:
1. Job Displacement: Robots and automation are likely to replace repetitive, manual, and routine jobs (e.g., manufacturing, logistics, and data entry). Some middle-skill jobs may also be at risk as automation technologies become more sophisticated.
2. Job Creation: New roles will emerge in robotics maintenance, programming, AI development, and other tech-focused fields. Demand for human-centric jobs, such as healthcare, education, and creative industries, may increase as these areas are harder to automate.
3. Job Evolution: Many jobs will change in scope, requiring workers to collaborate with robots or leverage automation tools for productivity.

Impact on Inequality:
1. Widening Skill Gap: Workers with higher education and tech-savvy skills are more likely to benefit, while those in low-skill jobs may struggle to adapt. This divergence could exacerbate income inequality if reskilling programs are not widespread.
2. Geographic Disparities: Advanced economies with resources to invest in automation could benefit more than developing countries, increasing global inequality.
3. Ownership of Technology: Concentration of robot and AI ownership among corporations and wealthy individuals might widen wealth disparities unless equitable policies (e.g., profit sharing, taxes) are implemented.

Mitigating Inequality:
1. Education and Reskilling: Governments and companies need to invest in upskilling and reskilling workers to prepare them for the jobs of the future.
2. Universal Basic Income (UBI): UBI or similar safety nets could help address income gaps caused by job displacement.
3. Fair Policies: Regulations around labor, taxation, and profit sharing could ensure that the economic benefits of automation are distributed more equitably.
4. Support for Vulnerable Sectors: Strengthening social welfare systems and providing targeted support for industries and workers most at risk.

Video: @discover_our_planet_
#Innovation #Technology #Inequality
-
🚨 The AI Layoffs Begin – But What’s Really Behind the Headlines?

Over 50,000 tech workers have been laid off in just the first six months of 2025. Microsoft. Google. Meta. IBM. Duolingo. Klarna. These aren’t startups tightening budgets — they’re the AI giants.

So what’s driving this? It’s not just about cost-cutting. It’s not even purely about automation. It’s about AI alignment. A redefinition of what productivity looks like in an AI-first economy.

🔍 Microsoft is letting go of veteran engineers — including one who made TypeScript 10x faster — while investing $80B into AI this year. 30% of their code is now written by machines.
🛠️ IBM is cutting HR roles while hiring more engineers and salespeople — pivoting the workforce to AI-ready profiles.

This is not about replacing humans. It’s about making space for algorithms. For scalable solutions. For a different kind of workforce.

But as we celebrate efficiency, let’s pause and ask: what happens when we lose human mentorship, judgment, and creativity in the process?

⚠️ We’re entering an era where efficiency may win over empathy. And that has consequences — for culture, inclusion, and leadership.

Let’s flip the narrative. AI doesn’t have to be the villain. But we must adapt.
✅ Upskill in AI collaboration — prompt engineering, oversight, ethics.
✅ Double down on human strengths — empathy, strategy, leadership.
✅ Build inclusive programs — like our AI Leadership Roadmap at net4tec | 4 WoMen Careers in Technology — that close the gender gap and prepare diverse talents for AI-driven futures.

Because AI won’t replace all jobs. Just the ones that don’t evolve. This moment is not the end — it’s a call to lead.

👉 Are you future-proofing your role? Or waiting until it’s your turn?

#FutureOfWork #AI #Leadership #AIAlignment #TheNewFaceOfLeadership #DigitalTransformation #Upskilling #Inclusion #AILeadershipRoadmap #net4tec
-
The guide "AI Fairness in Practice" by The Alan Turing Institute from 2023 covers the concept of fairness in AI/ML contexts. The fairness paper is part of the AI Ethics and Governance in Practice Program (link: https://lnkd.in/gvYRma_R).

The paper dives deep into various types of fairness:

DATA FAIRNESS includes:
- representativeness of data samples,
- collaboration for fit-for-purpose and sufficient data quantity,
- maintaining source integrity and measurement accuracy,
- scrutinizing timeliness, and
- relevance, appropriateness, and domain knowledge in data selection and utilization.

APPLICATION FAIRNESS involves considering equity at various stages of AI project development, including examining real-world contexts, addressing equity issues in targeted groups, and recognizing how AI model outputs may shape decision outcomes.

MODEL DESIGN AND DEVELOPMENT FAIRNESS involves ensuring fairness at all stages of the AI project workflow by:
- scrutinizing potential biases in outcome variables and proxies during problem formulation,
- conducting fairness-aware design in preprocessing and feature engineering,
- paying attention to interpretability and performance across demographic groups in model selection and training,
- addressing fairness concerns in model testing and validation, and
- implementing procedural fairness for consistent application of rules and procedures.

METRIC-BASED FAIRNESS utilizes mathematical mechanisms to ensure fair distribution of outcomes and error rates among demographic groups, including:
- Demographic/Statistical Parity: Equal benefits among groups.
- Equalized Odds: Equal error rates across groups.
- True Positive Rate Parity: Equal accuracy between population subgroups.
- Positive Predictive Value Parity: Equal precision rates across groups.
- Individual Fairness: Similar treatment for similar individuals.
- Counterfactual Fairness: Consistency in decisions.

The paper further covers SYSTEM IMPLEMENTATION FAIRNESS, incl. Decision-Automation Bias (overreliance and overcompliance), Automation-Distrust Bias, contextual considerations for impacted individuals, and ECOSYSTEM FAIRNESS.

--

Appendix A (p. 75) lists Algorithmic Fairness Techniques throughout the AI/ML Lifecycle, e.g.:
- Preprocessing and Feature Engineering: Balancing dataset distributions across groups.
- Model Selection and Training: Penalizing information shared between attributes and predictions.
- Model Testing and Validation: Enforcing matching false positive/negative rates.
- System Implementation: Allowing accuracy-fairness trade-offs.
- Post-Implementation Monitoring: Preventing model reliance on sensitive attributes.

--

The paper also includes templates for Bias Self-Assessment, Bias Risk Management, and a Fairness Position Statement.

Link to authors/paper: https://lnkd.in/gczppH29

#AI #Bias #AIfairness
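To make the metric-based fairness criteria concrete, here is a minimal sketch (my own illustration, not code from the Turing guide) that computes two of them on toy binary predictions: the demographic parity gap and the true-positive-rate gap (one component of equalized odds). The arrays `y_true`, `y_pred`, and `group` are hypothetical example data.

```python
def rate(xs):
    """Fraction of 1s in a list (0.0 for an empty list)."""
    return sum(xs) / len(xs) if xs else 0.0

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between groups."""
    rates = {g: rate([p for p, gg in zip(y_pred, group) if gg == g])
             for g in set(group)}
    return max(rates.values()) - min(rates.values())

def tpr_gap(y_true, y_pred, group):
    """True-positive-rate (recall) difference between groups,
    one half of the equalized odds criterion."""
    tprs = {}
    for g in set(group):
        idx = [i for i, gg in enumerate(group) if gg == g]
        positives = [i for i in idx if y_true[i] == 1]
        tprs[g] = rate([y_pred[i] for i in positives])
    return max(tprs.values()) - min(tprs.values())

# Toy data: two demographic groups "a" and "b"
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(y_pred, group))
print(tpr_gap(y_true, y_pred, group))
```

A gap of 0 means the two groups are treated identically under that metric; note that in this toy example the parity gap is zero while the TPR gap is not, illustrating why the guide lists these as distinct criteria that generally cannot all be satisfied at once.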
-
Demand forecasting errors silently bleed profits and cash. This document shows 7 red flags in demand forecasting and how to fix them:

1️⃣ Over-reliance on historical data
↳ How to fix: incorporate external data like market trends, competitor activity, and consumer sentiment to enrich forecasts

2️⃣ Ignoring promotions and discounts
↳ How to fix: build a promotions-adjusted forecasting model, considering historical uplift from similar campaigns

3️⃣ Forgetting cannibalization effects
↳ How to fix: model cannibalization effects to adjust forecasts for existing products

4️⃣ One-size-fits-all forecasting method
↳ How to fix: use demand segmentation (for example, high variability vs. stable demand); do not treat all SKUs equally

5️⃣ Not monitoring forecast accuracy
↳ How to fix: track metrics like MAPE, WMAPE, and bias to improve over time

6️⃣ High forecast error with no accountability
↳ How to fix: tie accountability to S&OP (sales and operations planning) meetings

7️⃣ Past sales (instead of demand) consideration
↳ How to fix: base initial predictions on unconstrained demand, not on sales figures distorted by cuts and out-of-stock situations

Any others to add?
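For point 5️⃣, a quick sketch of the three accuracy metrics mentioned, using common definitions; conventions vary by team (e.g., sign of bias, handling of zero-demand periods), so treat these formulas as one reasonable choice rather than the only one. The `actual` and `forecast` series are hypothetical monthly demand for a single SKU.

```python
def mape(actual, forecast):
    """Mean absolute percentage error (skips zero-actual periods)."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100 * sum(abs(a - f) / a for a, f in pairs) / len(pairs)

def wmape(actual, forecast):
    """Weighted MAPE: total absolute error over total actual demand."""
    return 100 * sum(abs(a - f) for a, f in zip(actual, forecast)) / sum(actual)

def bias(actual, forecast):
    """Average forecast minus actual; positive = over-forecasting."""
    return sum(f - a for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly demand for one SKU vs. the forecast
actual = [100, 120, 80, 90]
forecast = [110, 115, 95, 90]

print(f"MAPE:  {mape(actual, forecast):.1f}%")
print(f"WMAPE: {wmape(actual, forecast):.1f}%")
print(f"Bias:  {bias(actual, forecast):+.1f} units/period")
```

WMAPE is often preferred over MAPE for SKU portfolios because it weights errors by volume, so a 50% miss on a slow mover does not dominate the metric.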
-
When L'Oréal uses AI to create new hair colors based on social media trends, they're in salons within weeks. Kraft Heinz—dead last in our study—still takes months to tweak a formula.

After analyzing 26 major CPG companies at IMD's Center for Future Readiness, I discovered what separates winners from losers: the most future-ready companies treat consumer data like insider trading information.

BACKGROUND: CPG in 2025 is brutal. Inflation persists. Gen-Z demands sustainability without premiums. Tariffs reshape supply chains daily. McKinsey & Company identified 150+ AI use cases for CPG transformation. Only 5 of 26 companies actually execute them.

THE REVELATION: Coca-Cola didn't randomly launch Topo Chico Hard Seltzer. Their AI spotted the trend through social listening while competitors debated in boardrooms. By launch, they'd secured distribution nationwide. That's not innovation. That's prediction.

What separates the top 5:

L'Oréal (#1): 3.5% of sales to R&D. AI analyzes preferences in real time. Virtual try-on apps. Creates products from social trends. A 110-year-old company with startup velocity.

The Coca-Cola Company (#2): Democratized AI internally. Every manager accesses demand forecasting. They analyze weather + social sentiment + sales simultaneously.

These aren't tech companies selling beauty and beverages. They're prediction machines that happen to make products.

THE WINNER'S FRAMEWORK:
1. AI at scale, not in pilots: winners integrate it into workflows; losers run demos.
2. Supply chains that anticipate: real-time visibility + AI forecasting = competitive firepower.
3. D2C as an intelligence goldmine: 73% use multiple channels; mine every interaction.
4. Disrupt yourself first: Coca-Cola launched Costa Coffee and hard seltzers. Grew. Kraft Heinz protected legacy brands. Shrank.
5. Sustainable without a premium: Gen-Z spending hits $12T by 2030; they demand action at everyday prices.

——

The inconvenient truth: most CPG companies treat data like reporting instead of radar. Winners don't predict trends; they're already shipping products while competitors debate.

Technological patience (knowing when to scale) + organizational agility (pivoting fast) = market domination.

Three years from now, every CPG company operates like L'Oréal. Or they don't operate at all.

P.S. Full Future Readiness Indicator here: https://bit.ly/3YTBzbX
-
Over the past 10 weeks, I’ve interviewed 35 talent and learning leaders at Fortune 1000 companies for a report I’ll be releasing this fall. One of my favorite questions has been the very first one: “𝐖𝐡𝐚𝐭 𝐚𝐫𝐞 𝐲𝐨𝐮𝐫 𝐭𝐨𝐩 𝐭𝐡𝐫𝐞𝐞 𝐩𝐫𝐢𝐨𝐫𝐢𝐭𝐢𝐞𝐬 𝐫𝐢𝐠𝐡𝐭 𝐧𝐨𝐰?”

With 105 priorities and counting, the responses vary widely given differences in industry, scope, and role (VP of learning, talent, talent management, leadership development), but here is a slice of what has been shared so far:

➡️ AI and Work Transformation: Clarify what AI means for the workforce, its implications for roles, and how teams can adopt it to accelerate development and efficiency.
➡️ AI Coaching Pilot: Launch an AI-powered coaching pilot program across the organization to scale leadership development support.
➡️ Generative AI Upskilling: Upskill employees and leaders to effectively use generative AI in day-to-day work.
➡️ Future of Work & Workforce Planning: Prepare for disruptions to job architecture by integrating human and digital workforces. Rethink responsibilities, structures, and collaboration models.
➡️ Change Management: Embed change management capabilities at all levels, particularly around AI adoption.
➡️ New Leadership Behaviors: Equip leaders with new capabilities to thrive in a changing environment, including adaptability, resilience, and the ability to lead in an AI-augmented workplace.
➡️ Skills and Career Paths: Create career paths built around the skills our organization prioritizes.
➡️ Rethinking the Function: Redesign the talent and learning function to reflect disruption caused by AI.
➡️ Change Leadership: Navigate a period of executive turnover and transition by stabilizing the leadership team, clarifying roles, and building confidence with functional business leaders.
➡️ Facilitating Connection: Partner with our employee experience and workplace teams to use in-office team days for learning and connection.
➡️ Linking Performance and Development: Redesign performance processes to connect directly to development, helping employees understand what growth means in practical and tangible terms.
➡️ Manager Development: Continue to strengthen manager capability and resources, ensuring managers are equipped to drive performance and support employee development.
➡️ VP and SVP Development: Support and accelerate the growth of new vice presidents and senior vice presidents as they step into expanded leadership roles.
➡️ Building a Leadership Bench: Develop and execute a strategy for strengthening the leadership bench, with a focus on preparing our Top 200 leaders.
➡️ AI/Learning: Use AI internally within the learning function and focus on key AI skills for client-facing practitioners.
➡️ Academies for AI/Data Roles: Develop and roll out an academy for our AI & data product employees.

I’d love to hear your perspective: what stands out most to you about this list, or what themes are you seeing in it?
-
Artificial intelligence has the potential to make the workplace much more accessible.

🗣️ Automatic speech recognition and visual description software are among the artificial intelligence technologies enhancing workplace accessibility like never before. With live captioning, voice commands and transcription capabilities, these tools can foster a more inclusive and productive work environment.

🤖 "AI is going to be hugely impactful for disabled people in the workplace because it will hopefully make accessibility mainstream and available to everyone, just like Apple did with the iPhone or Amazon with Alexa devices," says Robbie Crow, a workplace disability inclusion expert.

🌎 The World Economic Forum 2023 report on AI and disability inclusion highlights that excluding people with disabilities can cost up to 7% of a country’s GDP. Implementing a disability-inclusive business strategy with assistive AI could result in 28% higher revenue and 30% higher profit margins for companies.

🦾 The benefits for everyone are clear, says Crow, with technologies that can simplify tasks and make consuming large amounts of material much easier. But AI is also having a wider impact for people who are blind, deaf or neurodivergent.

🖥️ "AI can produce descriptions for any images – graphs, images and infographics, etc – and it can even tell you what’s on the screen in real time. That’s something blind people have always been missing out on unless they had human support," says Crow.

👀 However, companies must also be aware of AI's potential weaknesses, Crow adds. "AI in recruitment, for example, isn't yet ready to remove biases towards people who can't make eye contact, who make spelling mistakes in applications or who answer questions literally. AI will have a positive impact, but we need to be mindful of ethical AI and train it to remove inherent discrimination across the board."

How else could AI make the workplace more accessible in 2025 and beyond? Weigh in using the hashtag #BigIdeas2025. And check out the rest of this year’s Big Ideas below.

UK: https://lnkd.in/gP_88hj8
Europe: https://lnkd.in/BI25Europe

✍️ Neha Jain Kale and Jennifer Ryan
Sources: World Economic Forum: https://lnkd.in/gttNFNJR
-
How can leaders transform their teams to be AI-first? It starts with mindset.

An AI-first mindset means:
- Seeing AI as an opportunity, not a threat.
- Viewing AI as a tool to augment teams, not just automate tasks.
- Using AI to reimagine work, not just optimize it.

As leaders, it’s on us to build this mindset within our teams. Here are 5 ways we do this at HubSpot:

Use AI daily: Lead by example—trust grows when teams see leaders embrace AI themselves. I use it every day and share very specific use cases with our company on how I use it. Now every leader is doing the same with their teams. The result is that almost everyone in the company will be using AI daily by the end of the year.

Apply constraints: Give clear, focused challenges. We kept headcount flat in Support while growing the customer base by 20%+. Result: the team innovated with AI and overachieved the target. Smart constraints drive innovation.

Establish tiger teams: Empower small, agile groups to experiment, innovate, and teach the organization. We have AI tiger teams in every function; they share progress in Slack channels, and there is so much energy with small groups experimenting and learning.

Be a learn-it-all: Foster a culture of continuous learning. Share openly about successes and failures alike. We have dedicated 2 full days to learning and scaling with AI this quarter as a company, with great speakers lined up, ways to experiment, and gamified learning.

Measure progress and share it: Track which teams are completing learning modules and using AI every day, and share that openly. A little healthy competition goes a long way in driving AI fluency.

AI isn’t just a technology shift. It’s fundamentally reshaping how work gets done—and that requires shifting our mindset first. Leaders who embrace AI now will unlock creativity, performance, and impact.

Are you building an AI-first mindset with your team?

#Leadership #AI #Innovation #Mindset #FutureOfWork