We decide where AI takes us: DataKind's Lauren Woodman urges trust, transparency, governance at Davos
The idea that AI could one day rival or even surpass human intelligence has captured the world’s imagination. Yet Woodman cautions against treating AI as an unstoppable force of nature.

- Jan 20, 2026
- Updated Jan 20, 2026 6:17 PM IST
At the World Economic Forum, as global leaders debate whether artificial intelligence (AI) is the most powerful technology humanity has ever built, Lauren Woodman, CEO of DataKind, offered a grounded reminder: AI’s trajectory is not predetermined. It is being shaped, line by line, by human decisions.
“AI is changing so rapidly,” Woodman told Business Today on the sidelines of the World Economic Forum in Davos. “It is remarkable how quickly it is evolving and being expanded into all the places that we're using it.” But for Woodman, the real story is not about superintelligence timelines or compute races. It is about governance, accountability and trust.
The idea that AI could one day rival or even surpass human intelligence has captured the world’s imagination. Yet Woodman cautions against treating AI as an unstoppable force of nature. “We get to decide where AI takes us,” she said.
“We write the code. We decide where systems get deployed. We should be thoughtful about how far we push, how fast we push, and what rules we put in place.”
At the heart of her argument is a simple principle: technology is neutral; its impact depends on how societies choose to use it.
“If AI is being used to determine my eligibility for a loan or admission to a school, I want transparency in what rules are in place and what governs that AI,” she said. “That’s what builds trust.”
In an era where authoritarian systems can deploy AI at speed and scale, Woodman argues that democratic societies must respond not by slowing down innovation, but by embedding responsibility into it. Guardrails, regulations and internal company governance must evolve alongside the technology itself.
“There are legitimate concerns about where AI could take us,” she said. “But that future is not written yet.”
On jobs, Woodman struck a pragmatic tone. AI will eliminate some roles, but it will also create new ones, from model maintenance to prompt engineering. The real challenge, she said, is reskilling at scale.
“New jobs will be created. But that requires investment in people.”
For Woodman, the biggest mistake governments and companies can make is to treat AI as merely a technical problem. “Data is plentiful,” she said. “What’s hard is asking the right questions.”
