In conversation with Sonali Verma – a renowned media and technology expert with extensive experience in digital transformation and AI-driven journalism. As the lead of the INMA Generative AI Initiative and an advisor to the Harvard Shorenstein Center and the World Economic Forum, she actively shapes the future of the media industry. She discusses the opportunities and challenges of generative AI, ethical considerations, and the transformation of journalism in an increasingly data-driven world.

In what ways do you integrate AI tools into your daily routine, whether at work or in your private life?
Like most people reading or listening to this, I use AI wherever it can save me time—as long as I don’t have to worry about accuracy. For example, when I’m researching for a report, I don’t rely on AI because accuracy is crucial, and I know that AI sometimes fabricates information. In those cases, I go back to original source papers, dig deep, verify facts, and double-check everything.
But for other tasks, I find AI incredibly useful—especially for creativity and brainstorming. Take something as simple as planning dinner. If we’re having eight guests over, one person can’t have soy, and another is vegan, what’s a good menu that accommodates everyone while still including something my husband wants to cook? In situations like these, AI is great for generating ideas I might not have thought of myself.
I also love using AI for transcription and summaries. The summaries on Zoom calls, for instance, are not perfect, but they’re pretty good. It’s a great shortcut that saves me from manually transcribing conversations.
Beyond that, there are all the ways we use AI without even realizing it—like when writing emails or newsletters. Programs like Gmail, Word, and Outlook suggest words or phrases, and I probably accept those suggestions more often than I realize. At this point, it doesn’t even feel like AI anymore—it’s just part of how we write and communicate.
Lately, I’ve almost completely switched from Google to Perplexity. I like that it answers my questions while also providing sources to explore further. Honestly, I can’t remember the last time I googled something—I’m on Perplexity all the time now.
What potential risks or threats do you see coming with this technological development?
Here’s the thing: Generative AI looks like magic. You type something in, and suddenly—boom!—it delivers a deep, thorough analysis, full of well-crafted sentences. It’s impressive. It feels like magic.
But the problem is: it makes stuff up. That’s the first risk—hallucinations. Then, there’s something even more fundamental to understand: AI isn’t thinking or analyzing the way we do. It’s just predicting the next word based on everything it has read on the internet.
And that brings us to another major risk: bias. AI absorbs the internet’s biases because it’s trained on them. And whose voices are the loudest online? Often, misogynistic, racist, or ideologically skewed perspectives. That means AI-generated content isn’t neutral—it’s shaped by who dominates online discourse.
Then, there’s the language problem. The internet is mostly dominated by six major languages. If your work happens to be in a less common language, it’s likely underrepresented—or not represented at all—in AI training data.
Beyond bias and inaccuracy, there’s another issue: ethics. Where does AI get its knowledge from? Was it taken from a news company? Did the AI companies pay for it, or is it intellectual property theft?
And when we move beyond text to images, things get even murkier. Every AI-generated image is based on photographs taken by real people over time. Did those photographers get paid? Or did AI models just scrape and repurpose their work?
So, this raises a big question about the entire supply chain of generative AI. Where does this information come from? And—perhaps more importantly—why does AI present certain answers as the best ones?
Yes, newer reasoning models are more transparent, but they still don’t fully reveal why they selected certain information or where exactly it came from. And that’s the real challenge—it’s still a black box.
What are some interesting use cases for AI in your industry that you have already come across?
There are so many interesting use cases for AI—I could talk about them all day. Broadly, they fall into two categories: efficiency & productivity and new product development. But honestly, these two often go hand in hand.
Take the news media industry, where I work. AI is now embedded in every stage of the news creation process—worldwide. And that’s not an exaggeration.
For example, imagine a City Council meeting takes place. Once the official minutes are released, AI can scan the document, highlight key points, and flag important statements for a reporter. It might say: “Hey, this was said. That decision was made.”
From there, AI can draft a rough version of a news story or help generate ideas about why this information matters. It acts as a support system for journalists, making the process faster and more efficient.
What is a specific insight you have gained from working closely with AI that you believe most people do not know?
One thing I’ve learned—that many people might not know—is what best practices for implementing AI actually look like. It’s still a new frontier, and generative AI only became mainstream about two years ago. Back then, many news organizations and companies were scrambling to figure out how to use it effectively.
So, what do I know? The first step in AI adoption is setting clear guidelines. You need to define upfront: What are we going to use AI for? What is absolutely not a use case for us?
Ideally, this process should be crowdsourced within the organization, gathering insights from different teams. Without well-defined boundaries, AI implementation can quickly become unstructured and ineffective.
Which changes driven by artificial intelligence have you identified in your field of work? And what developments would you wish for?
If I had a magic wand, I’d use it to fix some of the biggest problems news consumers face today—because, let’s be honest, we’re in a crisis of confidence and trust in news.
We’re also dealing with news fatigue and even news avoidance. And I’m sure the readers have felt this too—there’s just too much news, and most of it is bad. On top of that, it’s often presented in a way that makes it exhausting to consume.
If I could change one thing, it would be this: every news consumer would get access to trusted, high-quality journalism from credible sources. Not just some random person on YouTube, but news that comes with transparency—clear verification, a rigorous process, and reasons to trust the source.
In an ideal world, trust in news wouldn’t be a question mark—it would be built into how information is delivered. And each person would receive it in a way that actually makes sense for them.

Sonali Verma leads INMA's Generative AI Initiative, surfacing practical use cases at news brands and sharing her expertise with the INMA community through conferences, newsletters, webinars and workshops. She is an executive media consultant who worked at The Globe and Mail for 15 years, where she led a mindset shift in getting journalists to understand how data and AI could make them more effective in attracting and retaining subscribers. She also collaborated with The Globe’s data science team to build Sophi.io, a machine-learning platform that is now used by media companies on five continents. She previously worked as a journalist at Reuters, CNBC and Bloomberg in Asia, North America and Europe. She was a visiting fellow at the Reuters Institute for the Study of Journalism at Oxford University.
This interview is part of PANTA Experts, where we interview diverse experts in the media industry and beyond. Interviewer: Jan Kersling (PANTA RHAI).