Emerging Innovations in Journalism


Summary

Emerging innovations in journalism refer to the groundbreaking use of technologies like artificial intelligence (AI) and generative tools to transform the way news is researched, created, and delivered. These advancements are enabling faster reporting, deeper investigations, and more personalized storytelling, while also presenting ethical challenges like misinformation and transparency concerns.

  • Explore AI-assisted reporting: Utilize AI tools for tasks such as data analysis, content summarization, and transcription to enhance efficiency and uncover intricate stories hidden in complex datasets.
  • Prioritize transparency and ethics: Clearly communicate how AI is used in content creation and implement safeguards like fact-checking and metadata tracking to build and maintain audience trust.
  • Invest in training: Provide journalists with accessible opportunities to learn about AI technologies, addressing knowledge gaps and empowering them to responsibly integrate these tools into their workflows.
Summarized by AI based on LinkedIn member posts
  • Josh Lawson

    Mission Alignment @ OpenAI

    6,583 followers

    AI is becoming a critical (and audited) tool for journalists uncovering facts buried in data. Pulitzer winners and finalists offer a glimpse into how AI is reshaping investigative journalism:
    >> WSJ’s Musk analysis: Used semantic vector mapping to chart Musk’s political shift via 41K+ X posts, leveraging embedding models to visualize ideological drift.
    >> Reconstructing “40 Acres”: A custom image-recognition model scanned 1.8M Freedmen’s Bureau records to uncover erased land grants to freed Black Americans.
    >> Gaza forensic reporting: The Washington Post used satellite AI from Preligens to challenge claims of nearby military targets.
    >> “Lethal Restraint” database: AP & the Howard Center used OCR, Whisper, and Textract to index 200K+ documents, exposing non-firearm police killings.
    Great write-up by Andrew Deck!
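The "embedding models to visualize ideological drift" approach mentioned above can be illustrated in miniature: embed each post as a vector, then project the vectors to 2D with PCA so drift over time becomes visible. This is a toy sketch on synthetic data, not the WSJ's actual pipeline; the function name and parameters are my own.

```python
import numpy as np

def pca_project(embeddings: np.ndarray, dims: int = 2) -> np.ndarray:
    """Project high-dimensional embeddings down to `dims` via PCA (SVD)."""
    centered = embeddings - embeddings.mean(axis=0)
    # Rows of vt are principal axes; project onto the top `dims` of them.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dims].T

# Synthetic stand-in for post embeddings that drift over time.
rng = np.random.default_rng(0)
n_posts, dim = 200, 50
drift = np.linspace(0, 3, n_posts)[:, None] * np.ones((1, dim))
embeddings = drift + rng.normal(scale=0.5, size=(n_posts, dim))

coords = pca_project(embeddings)
print(coords.shape)  # (200, 2)
```

With real data the embeddings would come from a sentence-embedding model rather than a random generator; the projection step is the same.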

  • Sam Gregory

    Human rights technologist. TED AI and deepfakes speaker. ED WITNESS (Peabody Impact Award winner). Expert: generative AI | human rights | authenticity + trust | media | mis/disinformation. Strategic foresight. PhD Comms

    7,890 followers

    I spoke to MIT Technology Review about OpenAI's new text-to-video tool Sora: “The expressive capabilities offer the potential for many more people to be storytellers using video. And there are also real potential avenues for misuse.” The creative potential for storytelling for change, and for ethically recreating stories that cannot be filmed, will undoubtedly become part of the practice of human rights and journalistic media-making. But that comes in tandem with serious misuse risks driven by the rapid technical progress we've seen in the barely twelve months since Will Smith was eating spaghetti... A few considerations:
    * Malicious synthesis and in-paint edits could recreate or doctor conflict or generic rights-violation contexts, without an easy referent to trace back in the case of whole synthesis (e.g. via a reverse image search).
    * Editing together AI and real footage, and using in-paint edits on video segments to make subtle changes, will confound binary classifications of AI or not. The C2PA metadata approach helps track this complexity but largely relies on good-faith participation.
    * Increasing stylistic imitation is a powerful expressive tool but also a way to manipulate viewers, e.g. by mimicking the authenticity heuristics of shaky UGC.
    Watching this with genuine excitement and trepidation. #sora #generativeAI #openai #provenance https://lnkd.in/e-zHqBFe

  • We at Epicenter NYC have been part of a consortium sharing best AI practices among small publishers, thanks to a grant from the Patrick J. McGovern Foundation. I wanted to share a piece we compiled with an assist from D. Mishra's GovWire, which monitors public meetings and creates an article, summary, transcript, and podcast, among other content items. And I'll share some learnings. Last week, there was a community board meeting in Queens that none of us could get to (a frequent challenge for small, strapped news outlets in big, newsy cities). So Carolina V. Valencia and I sent some of the basics to GovWire staff just to see what was possible. The result is this piece, a combination of humans and tech, and I suspect that is going to be more the norm for publishers like us versus either-or. Some takeaways:
    - Aspects of AI make the fact-checking process much faster and smoother; in this case, we checked quotes and titles against a transcript.
    - We found the summary option to be most valuable for users. Traditional articles (inverted-pyramid style) are not really how we deliver information, because we strive for action and context.
    - You definitely need subject experts reviewing content to check spellings, offer fuller descriptions of personalities, and explain where things go next.
    - I am bullish on the multiple formats that are suddenly possible, like short video and audio, even though we did not use these offerings (yet).
    - Original reporting after a meeting, paired with the AI summaries, might help reporters be more relevant and useful to readers. I found myself interviewing someone and really pushing on the WHAT NOW versus the WHAT HAPPENED.
    - It helps to have an AI skeptic or ethics czar or copy editor in the mix, even if it's a pain in the ass in the moment or slows things down. Explaining how we know something to be true or original feels like a crucial difference between independent, thoughtful, useful journalism and the alternatives.
    - I estimate we saved about 6-8 hours thanks to the tool; the meeting was 3+ hours long, and my process of stitching from the AI-generated content to my hybrid piece took about 2 hours total.
    - I've said this often and will say it again: you have to know the rules to break the rules. If I hadn't been doing this for a very long time now, I dunno if I'd have the confidence to approach journalism in this masala way. We editors and managers likely need to get in there so we can make our mistakes and share best practices and ethical models from a more privileged perch, versus junior reporters and others on the front lines who can't read our minds. I'll keep sharing as I tinker. https://lnkd.in/e64MqiVk
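The quote-checking step described above (checking quotes against an AI-generated transcript) can be approximated with a fuzzy sliding-window match. A minimal sketch; the function name, threshold, and sample transcript are invented for illustration, not GovWire's actual tooling.

```python
from difflib import SequenceMatcher

def verify_quote(quote: str, transcript: str, threshold: float = 0.85) -> bool:
    """Return True if the transcript contains a span closely matching the quote."""
    quote = quote.lower()
    words = transcript.lower().split()
    window = len(quote.split())
    # Slide a window of the quote's length across the transcript words.
    for i in range(max(1, len(words) - window + 1)):
        span = " ".join(words[i:i + window])
        if SequenceMatcher(None, quote, span).ratio() >= threshold:
            return True
    return False

transcript = "The board voted to approve the new bike lane on Main Street next spring."
print(verify_quote("approve the new bike lane", transcript))   # True
print(verify_quote("reject the bike lane plan", transcript))   # False
```

A real workflow would also normalize punctuation and flag near-misses for human review rather than silently passing or failing quotes.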

  • Michael Lin

    Founder & CEO of Wonders.ai | AI, AR & VR Expert | Predictive Tech Pioneer | Board Director at Cheer Digiart | Anime Enthusiast | Passionate Innovator

    16,346 followers

    BBC's director of nations, Rhodri Talfan Davies, recently shared that the organization is exploring the possibilities of using generative AI for journalistic research, production, archiving, and personalization. This move towards AI is aimed at bringing "more value to our audiences and society". To ensure the public's trust in the journalism sector, the BBC has laid out three guiding principles: acting in the public's best interests, prioritizing talent and creativity while respecting artists' rights, and being open and transparent about content produced using AI. The organization also plans to collaborate with tech firms, other media entities, and regulators to safely nurture generative AI, and will soon initiate multiple projects examining the potential applications of generative AI in content creation, journalistic research, archiving, and tailored user experiences. It's worth noting that other media houses, such as the Associated Press, have already unveiled their views on AI. The AP has also collaborated with OpenAI to use its articles for training GPT models. However, the BBC has blocked OpenAI's and Common Crawl's web crawlers from accessing its websites, in line with the actions taken by other media giants like CNN, the NYT, and Reuters. The primary reason for this block, as stated by Davies, is to protect the interests of those who pay the license fee; he asserts that training AI on BBC data without its consent is contrary to the public interest. Overall, the BBC's evaluation of generative AI presents exciting opportunities to bring more value to its audiences and society while retaining trust in the journalism sector. #technology #innovation #artificialintelligence #generativeai #openai

  • Chris Kraft

    Federal Innovator

    20,423 followers

    #GenAI is reshaping journalism, but how are journalists actually using it, and how do audiences react? This report brings together a multi-country study and dives into AI-generated content in journalism, journalists' perceptions and use of GenAI, and audiences' perceptions.
    ➡️ Key Findings
    - AI bias can take many forms
    - AI tools and models are almost always frustratingly opaque
    - It is more acceptable to use AI illustrations than to use AI as a replacement for camera-based journalism
    - Concerns identified about the potential of AI-generated content to mislead
    - Concerns identified about the effect that GenAI will have on society
    - A minority of the outlets interviewed had GenAI policies in place
    - Transparency about when and how AI is used is important
    - Only a minority of interviewees were confident they had encountered AI-generated content
    - Consumers with AI experience tend to be more comfortable with its use in journalism
    ➡️ Use Cases
    - Enriching and brainstorming
    - Editing
    - Creating
    ➡️ Legal / Ethical Issues
    - Mis/disinformation
    - Labor displacement
    - Copyright
    - Detection difficulties
    - Algorithmic bias and reputational risk
    Report Source: https://lnkd.in/eMEf6Gne
    If you're interested in GenAI in journalism, I also recommend this report from the Reuters Institute for the Study of Journalism: https://lnkd.in/enSuNQUK

  • Damian Radcliffe

    Carolyn S. Chambers Professor in Journalism at University of Oregon | Journalist | Analyst | Researcher | Journalism Educator

    4,684 followers

    📌 I have a new report out today for the Thomson Reuters Foundation on how journalists in the Global South and emerging economies are using AI, and the challenges they face in using these technologies. The research is based on a Q4 2024 survey and responses from more than 200 journalists in over 70 countries.
    📊 Some key findings:
    1️⃣ More than 80% of our sample uses AI, with many journalists using it for transcription, translation, and content editing.
    2️⃣ Yet only 13% of respondents said their newsroom has an AI policy.
    3️⃣ Skill gaps are a challenge: over 50% of journalists using AI are self-taught, emphasizing the need (and opportunity) for better training.
    4️⃣ Ethical concerns, Western bias in LLMs, and lack of awareness of how to use AI are all factors inhibiting further take-up and adoption.
    5️⃣ AI tools remain expensive; affordability is an additional barrier for many newsrooms in the Global South.
    6️⃣ Respondents believe that regulation is needed to address a myriad of factors, from ethical concerns to fears around misinformation, and more. Awareness of existing policies and discussions is low among journalists.
    🤔 So, where do we go from here? The report outlines key recommendations for journalists, policymakers, funders, and media development organizations, designed to foster the responsible and ethical development of AI and its integration into journalistic work in emerging economies and the Global South. 🎯
    📖 Read the full report here: https://lnkd.in/gYZKRdg3 #AI #Journalism #Digital #DigitalTransformation #Media #MediaDevelopment #Research

  • Pete Pachal

    Founder of The Media Copilot, Where AI Meets Media

    10,127 followers

    If you still had doubts that AI was going to play a major role in newsrooms, The New York Times just erased them. The paper is now officially allowing its journalists to use AI tools for specific tasks, things like generating SEO headlines, summarizing articles, brainstorming ideas, editing, and research, according to a report from Semafor. But there’s a firm no-no list, too:
    🚫 No drafting or major rewriting of articles
    🚫 No uploading copyrighted third-party material
    🚫 No paywall workarounds
    🚫 No generative images or video (except to report on the tech itself)
    Reporters have a curated toolset to work with, including Google Vertex AI, GitHub Copilot, and a limited version of OpenAI’s API (only with legal approval). The Times also built its own summarization tool, Echo. The Times often gets knocked for being late to trends, but in reality it has been an early adopter of new storytelling tech: longform interactives, VR experiments, even an app for reading with gestures. So it’s not too surprising that it’s embracing AI, even while suing OpenAI. AI isn’t just another platform; it’s a foundational shift in knowledge work. The Times sees the risks but also understands that banning AI doesn’t make it go away. Studies show that restrictions just lead to shadow AI: unregulated, unauthorized use. So it’s opening the door. The real test? Whether The Times can keep control when the first big AI-related error inevitably happens. But if any newsroom has the standards and policies to pull this off, it’s probably the Gray Lady.

  • Yumi Wilson

    Speaker & Workshop Leader on AI, Storytelling & Reinvention | Fulbright Specialist | Life & Career Coach | Professor at SF State | Host, “A Journalist’s Guide to AI”

    10,974 followers

    Hey guys! New episode out now: AI, Journalism, and Citizen Empowerment in Cuba 🌍 What happens when independent media meets AI in one of the world’s most restrictive environments? In this episode of AI in Journalism, I sit down with José Jasán Nieves Cárdenas, Editor-in-Chief of El Toque and ICFJ Knight Fellow. El Toque is an independent digital media outlet operating across multiple countries. It uses AI and data journalism to provide essential insights for Cuban citizens, often under intense pressure from the Cuban government. Here’s what we dive into:
    1️⃣ AI-Driven Financial Insights: Discover how El Toque uses AI to track Cuba’s black-market exchange rate, turning social media data into real-time financial insights.
    2️⃣ Journalism as a Memorial: José shares the story behind their project documenting Cuban migrants who lost their lives seeking freedom.
    3️⃣ Empowering Through Knowledge: Hear how El Toque educates citizens on their rights and challenges official narratives to foster greater awareness.
    ✨ José’s work exemplifies how journalism can be a force for change, even in the most challenging environments. This episode is a must-listen for journalists, media professionals, and anyone interested in AI’s role in empowering people and preserving human dignity. 🎧 Tune in now to explore how AI and journalism intersect to make a difference. https://lnkd.in/eFUUPBPW #AIinJournalism #ElToque #Cuba #DataJournalism #MediaInnovation #ICFJ #SocialImpact #Podcast José Jasán Nieves
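The idea of turning social media posts into an exchange-rate signal, as El Toque does, reduces at its core to extracting offered rates from free text and aggregating a robust estimate. A toy sketch of that core idea only; the regex, sample posts, and use of a median are my own assumptions, not El Toque's published methodology.

```python
import re
from statistics import median

# Match patterns like "320 CUP por USD" or "315 pesos por dolar" (illustrative only).
RATE_RE = re.compile(
    r"(\d{2,4})\s*(?:cup|pesos)\s*(?:por|per)?\s*(?:usd|d[oó]lar)",
    re.IGNORECASE,
)

def extract_rates(posts):
    """Pull candidate USD exchange rates (in CUP) out of free-text posts."""
    rates = []
    for text in posts:
        for match in RATE_RE.findall(text):
            rates.append(int(match))
    return rates

posts = [
    "Vendo dolares a 320 CUP por USD, mensaje directo",
    "Cambio 315 pesos por dolar, zona Vedado",
    "Compro a 325 CUP por USD hoy",
]
rates = extract_rates(posts)
print(median(rates))  # 320
```

Using a median rather than a mean keeps a single outlier offer from skewing the published rate, which matters when posts include typos or deliberately off-market quotes.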
