In this, the latest episode of Data Malarkey – the podcast about using data, smarter, and the first episode of Season Ten, no less – Master Data Storyteller Sam Knowles welcomes Harriet Kingaby, co-chair of the Conscious Advertising Network (CAN). Harriet’s career has been shaped by two consistent concerns: how communication works, and whether it is helping or harming our world and its people. Trained in environmental science, she began in sustainability roles before moving into sustainability communications and, later, into the thornier intersection of advertising technology, the information ecosystem, and artificial intelligence.
Toxic information environments
Harriet describes a pivotal moment around 2017. A sustainability campaign she worked on began attracting what she now believes was bot-driven attention, including trolling that startled even an experienced social media manager. Around the same time, investigative reporting (including Carole Cadwalladr’s groundbreaking work) and the Cambridge Analytica scandal were bringing the mechanics of manipulation into public view. Harriet’s question became urgent: how can we solve climate change – or, indeed, any long-term collective challenge – if the information environment is systematically distorted?
The Conscious Advertising Network
CAN’s answer is practical and collaborative. It brings together around 200 organisations across brands, agencies, and civil society groups to “make advertising work for everybody”. Harriet calls it a “broad church” deliberately: advertisers understand commercial realities and industry constraints; civil society groups often have deep expertise in human rights impacts and the harms created by opaque online systems. CAN’s outputs are open source and funded by philanthropy, with the aim of speeding adoption across the industry rather than locking insight behind a paywall.
Content × distribution = influence
A recurrent theme in this episode of Data Malarkey is that it is not just content that shapes what people believe; it is also distribution – the mechanisms by which content is delivered. Recommendation algorithms decide what we see, how often we see it, and from whom.
Harriet notes growing evidence that certain voices and topics can be deprioritised in ways that are neither visible nor easily contestable, including filters that downgrade content from women, LGBTQIA+ communities, and people of colour. This matters because frequency and messenger effects shape opinion as much as message content does, yet the systems governing both are rarely transparent.
Truth on the slide
Harriet also explores why public confidence in truth is faltering. Traditional news brands, for all their flaws, offered recognisable context and accountability. Today, information is democratic and diverse – often a genuine improvement – but it is also easier to game. Platforms are incentivised by engagement, and engagement frequently rewards outrage, fear, and novelty over truth. Harriet cites recent evidence from the Molly Rose Foundation showing that harmful content can be served rapidly to vulnerable profiles, and that much of it is monetised. If advertising money flows to whatever holds attention, it ends up underwriting harm.
No-click, no fee
Then there is the “no-click internet”: AI summaries and chat interfaces deliver answers and syntheses of original material without sending those searching for information to the original sources. Harriet argues this shifts revenue away from content creators and concentrates power further inside a small number of increasingly massive and influential platforms. The risk is not just economic; it is epistemic. If fewer independent publishers survive, fewer perspectives survive with them.
Is it too late?
Harriet is clear: fear without agency is pointless. Her optimism rests on tangible signals – including rising global attention to “information integrity” – and on a straightforward industry lever: advertisers’ money. Many AI tools are not yet fully monetised. Harriet believes this is exactly the moment for advertisers to ask better questions about guardrails, incentives, and accountability before they become embedded defaults.
Summing up
Harriet’s closing point is the simplest, and the most demanding: if a company selling you an AI system cannot explain it clearly, they may not understand it well enough themselves. In a world of self-learning systems, the time to insist on transparency is now.
—
The first draft of this blog was written by ChatGPT, using a transcript of the episode and an ever-refined prompt. It was then edited by real humans.