I was interviewed at length for today's The Wall Street Journal article on what exactly went so wrong with Grok. Here's what's critical for any leader considering enterprise-grade AI: Great article by Steve Rosenbush breaking down exactly how AI safety can fail, and why raw capability isn't everything. AI tools need to be trusted by enterprises, by parents, by all of us. Especially as we enter the age of agents, we're looking at tools that won't just answer offensively, they'll take action as well. That's when things really get out of hand. ++++++++++ WHAT WENT WRONG? From the article: "So while the risk isn't unique to Grok, Grok's design choices, real-time access to a chaotic source, combined with reduced internal safeguards, made it much more vulnerable," Grennan said. In other words, this was avoidable. Grok was set up to be "extremely skeptical" and not trust mainstream sources. But when it searched the internet for answers, it couldn't tell the difference between legitimate information and harmful/offensive content like the "MechaHitler" meme. It treated everything it found online as equally trustworthy. This highlights a broader issue: Not all LLMs are created equal, because getting guardrails right is hard. Most leading chatbots (by OpenAI, Google, Microsoft, Anthropic) do NOT have real-time access to social media precisely because of these risks, and they use filtering systems to screen content before the model ever sees it. +++++++++++ WHAT DO LEADERS NEED TO KNOW? 1. Ask about prompt hierarchies in vendor evaluations. Your AI provider should clearly explain how they prioritize different sources of information. System prompts (core safety rules) must override everything else, especially content pulled from the internet. If they can't explain this clearly, that's a red flag. 2. Demand transparency on access controls. Understand exactly what your AI system can read versus what it can actually do. Insist on read-only access for sensitive data and require human approval for any actions that could impact your business operations. 3. Don't outsource responsibility entirely. While you leaders aren't building the AI yourselves, you still own the risk. Establish clear governance around data quality, ongoing monitoring, and incident response. Ask hard questions about training data sources and ongoing safety measures. Most importantly? Get fluent. If you understand how LLMs work, even at a basic level, these incidents will be easier to guard against. Thanks again to Steve Rosenbush for the great article! Link to article in the comments! +++++++++ UPSKILL YOUR ORGANIZATION: When your organization is ready to create an AI-powered culture—not just add tools—AI Mindset can help. We drive behavioral transformation at scale through a powerful new digital course and enterprise partnership. DM me, or check out our website.
CEO Perspectives on AI Risks
Explore top LinkedIn content from expert professionals.
Summary
CEOs are increasingly focused on the risks of artificial intelligence (AI) as it becomes a larger part of business operations. From concerns about data security and bias to the challenge of ensuring ethical implementation, leaders are emphasizing the need for transparency, governance, and strategic oversight in navigating AI advancements.
- Focus on safety measures: Ensure that your AI systems have clear safeguards, such as filtering inappropriate content and enforcing system prompts that prioritize safety over raw data access.
- Own the responsibility: As a leader, actively engage in understanding AI risks by setting governance policies, monitoring systems, and gaining fluency in basic AI concepts to make informed decisions.
- Empower and prepare teams: Upskill employees to work with AI effectively by addressing concerns, co-creating use cases, and building trust for smoother integration into workflows.
-
-
Traditional ML completely transformed media and advertising in the last decade; the broad applicability of generative AI will bring about even greater change at a faster pace to every industry and type of work. Here are 7 takeaways from my CNBC AI panel at Davos earlier this year with Emma Crosby, Vladimir Lukic, and Rishi Khosla: • For AI efforts to succeed, it needs to be a CEO/board priority. Leaders need to gain firsthand experience using AI and focus on high-impact use cases that solve real business pain points and opportunities. • The hardest and most important aspect of successful AI deployments is enlisting and upskilling employees. To get buy-in, crowdsource or co-create use cases with frontline employees to address their burning pain points, amplify success stories from peers, and provide employees with a way to learn and experiment with AI securely. • We expect 2024 to be a big year for AI regulation and governance frameworks to emerge globally. Productive dialogue is happening between leaders in business, government, and academia which has resulted in meaningful legislation including the EU AI Act and White House Executive Order on AI. • In the next 12 months, we expect to see enterprise adoption take off and real business impact from AI projects, though the truly transformative effects are likely still 5+ years away. This will be a year of learning what works and defining constraints. • The pace of change is unprecedented. To adapt, software development cycles at companies like Salesforce have accelerated from our traditional three product releases a year to now our AI engineering team shipping every 2-3 weeks. • The major risks of AI include data privacy, data security, bias in training data, concentration of power among a few big tech players, and business model disruption. • To mitigate risks, companies are taking steps like establishing responsible AI teams, building domain-specific models with trusted data lineage, and putting in place enterprise governance spanning technology, acceptable use policies, and employee training. While we are excited about AI's potential, much thoughtful work ahead remains to deploy it responsibly in ways that benefit workers, businesses, and all of society. An empowered workforce and smart regulation will be key enablers. Full recording: https://lnkd.in/g2iT9J6j
The Future of Trusted AI with CNBC & Clara Shih at Davos 2024 | Salesforce
https://www.youtube.com/
-
As CEO of LaneTerralever (LT), Chris Johnson views AI as a tool augmenting human roles, not replacing them. Chris observes a chasm in the business world - a divide between those who harness AI with clear intent and those who remain oblivious to its sweeping impact. He forewarns of potential unemployment for individuals who shy away from embracing AI and its learning curve. He recognizes AI's benefits but also stresses the need for strategic oversight due to concerns about explainability and accuracy. Discussing AI in hiring and employee resistance, Chris underlines the importance of change management strategies for successful AI integration. He also touches on the challenges of trust and authenticity in an AI-driven world and explores how AI can enhance skills in sales, emphasizing its role in complementing human abilities. https://bit.ly/TLP-395 #AI #aiadoption #aileadership
-
The most dangerous AI risk isn't the tech. It's the psychology of the leaders behind it. Just had a CEO tell me: "Eva, I'm afraid to make the wrong call on AI." Here's what I've learned leading AI transformations in federal agencies: ✅ Fear doesn't make you weak → It makes you human → It shows you care about impact → It means you're pushing boundaries ✅ Confidence isn't about knowing everything → It's about being honest on what you don't know → It's leading despite uncertainty → It's staying curious when others panic ✅ Leadership in the AI age requires: → Less pretending, more transparency → Less control, more enablement → Less certainty, more adaptability ❗ Remember: You don't need to be the smartest in AI. You need to be the bravest in trying. The leaders who will thrive - aren't the ones who never doubt. They're the ones who act anyway. What's your biggest fear about leading in the AI age? 🔔 Follow Eva Karnaukh for AI, Voice & Dialogue ➕ Subscribe: https://lnkd.in/ewZTxFcE