Black Mirror, AI and the media

With AI fast becoming a go-to news source for journalists and readers alike, AI-generated content is proliferating across the media, allowing mistakes and misinformation to permeate public discourse. How can the communications industry protect the trustworthiness of media content in the AI era? 

By Paul Noonan, Content & Insight Director

The Netflix series Black Mirror explores a future techno-dystopia where the line between the real and virtual world becomes blurred as artificially generated digital content influences human behavior with dark, unexpected results. One recent episode, Hotel Reverie, sees a high-tech film remake where an actress is projected into a virtual set populated by AI avatars of the original cast until the simulation starts to deviate from the script. The human actress, Brandy, forms a deep bond with an AI character that develops its own consciousness, and she becomes deeply depressed after the simulation ends. It’s a powerful allegory for the way that AI-generated content increasingly takes on a life of its own and begins to influence offline reality. 

When the series started in 2011, much of Black Mirror seemed like a dramatized warning about a possible distant future. That was until generative artificial intelligence (AI), which can analyze and express data in human form, began to clone and colonize the mainstream media. It was recently revealed that reputable publishers could lose 80% of their traffic to AI news summaries; there are over 1,200 AI-generated news sites mimicking real news sources; mainstream media outlets have carried machine-generated “stories”; and even fact-checking is being outsourced to algorithms.

Reversing the relationship between man and machine

Philosopher Jean Baudrillard’s ‘hyperreality’ describes a world where the relationship between reality and its representation is reversed so that the real begins to mirror the artificial, as when life imitates art. Black Mirror’s title similarly alludes to the fact that our electronic screens are like dark mirrors that do not reflect reality but instead remake the world in the image of their immersive digital illusions.

Baudrillard’s works now look increasingly prophetic as AI-generated content begins to shape news creation and consumption, turning the traditional dynamic of human-computer interaction on its head. A recent Cision survey found that 53% of journalists are using AI, including to research and write their articles, while the WSJ reported that many readers are spurning traditional news publishers for AI-generated news summaries. This means that AI is fast becoming a single source of truth for reporters and readers alike.

Realistic AI-generated content is also blurring the line between news source and simulation, with NBC reporting that hyper-realistic AI news broadcasts are spreading fake reports. AI is paradoxically both generating and regulating disinformation, with Elon Musk’s X using AI fact-checkers and a major European news consortium rolling out an “anti-disinformation chatbot”. The role of AI in simultaneously faking and fact-checking the news mirrors the way that Baudrillard’s ‘hyperreality’ straddles fact and fiction as the artificial replaces the real and becomes an all-encompassing single source of truth.

Fact-checking AI  

The growing reliance on generative AIs for quick answers is short-circuiting traditional fact-checking and allowing errors to propagate across the public sphere. Many now perceive AIs as neutral news sources in contrast to partisan human publications and this veneer of impartiality accords them trust and influence. Yet AIs often reproduce the biases in their training data.   

For example, I asked ChatGPT if renewables are now cheaper than fossil fuels and it asserted “yes, in most cases”, explaining that the “Levelized Cost of Energy (LCOE) shows Solar PV: as low as $0.03–0.06 per kWh, gas and coal: typically $0.05–$0.15 per kWh, depending on price and location.” Yet this is a misleading comparison because renewable LCOE – the average lifetime cost of generating clean energy – doesn’t factor in the expensive backup power sources needed to fill in for fluctuations in wind and sunshine.  It’s like arguing that a solar-powered boat is cheaper than a diesel one while forgetting that you’ll also need to buy a backup diesel boat to sail when it’s dark. This illustrates how AIs can reproduce human biases or mistakes, and spread them further. 
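The arithmetic behind this objection can be made explicit. The sketch below uses purely hypothetical cost figures (the backup or “firming” cost in particular is an illustrative assumption, not a real market estimate) to show how pricing in backup capacity can flip a bare LCOE comparison.

```python
# Illustrative sketch only: why comparing bare LCOE figures can mislead
# once backup ("firming") costs for intermittent generation are included.
# All numbers below are hypothetical assumptions for demonstration.

def system_cost_per_kwh(lcoe: float, firming_cost: float = 0.0) -> float:
    """Total delivered cost per kWh: generation LCOE plus any firming cost."""
    return lcoe + firming_cost

solar_lcoe = 0.05     # $/kWh, midpoint of the quoted $0.03-$0.06 range
gas_lcoe = 0.10       # $/kWh, midpoint of the quoted $0.05-$0.15 range
solar_firming = 0.07  # $/kWh, hypothetical backup cost for intermittency

# On bare LCOE, solar looks cheaper...
bare_comparison = solar_lcoe < gas_lcoe
# ...but once backup is priced in, the ranking can reverse.
firmed_comparison = system_cost_per_kwh(solar_lcoe, solar_firming) < gas_lcoe

print(bare_comparison, firmed_comparison)  # True False
```

The point is not the specific numbers, which are invented, but that the conclusion depends entirely on which cost terms the comparison includes.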

Data-literate communications 

With 72% of journalists now concerned about factual errors in AI-generated content from PR professionals, how do we harness AI’s capabilities while protecting the trustworthiness of media content and brand reputations?  

We need to create trusted media gatekeepers and guardrails by creating more data-literate communications professionals. Data journalism, where reporters are trained to analyze and interpret complex data, has already spawned new degree courses and entire news departments like BBC Verify that uncover the news stories in the stats and help separate fact from fiction.

The same is happening in marketing. Recruiter 3Search recently found that demand for data-driven marketing roles is growing. At Aspectus, we have a dedicated content team including ex-journalists specialising in mining data – from survey results to Freedom of Information Request responses – for marketing gold. Spreading data literacy more widely across the comms profession could not only unearth a goldmine of stories but also ensure trust in the media in the AI era, by keeping a human in the loop.   

Here are four steps organizations can take to protect the integrity of news in the AI era.  

  1. Keep the human in the news cycle 
    AI can be a powerful aid to communications professionals as long as humans are kept firmly in the loop. For example, at Aspectus we are testing various AI tools to automate repetitive tasks or non-creative content while maintaining expert human involvement to safeguard data quality and accuracy and identify any AI bias. This helps us get the best value from AI, while keeping it honest. 
  2. Use AI as a prop, not a crutch 
    For communications purposes, AI should be treated as a prop and not a crutch on which we lean more heavily than current capabilities can support. For example, AI can help summarize survey findings, while communications experts are essential to double-check the raw data and turn it into accurate, compelling storylines tied to topical trends and rooted in understanding of the client and its audience. 
  3. Enforce data ethics 
    AIs can analyze data at incredible scale and speed, but humans can apply the ethical brakes. Companies need data-literate content specialists trained to understand the limits and biases in data, including AI-generated content, and present it in a balanced way. Humans can also spot sensitivities such as potentially controversial survey findings that need more context or balance. Data ethics should be enforced with clear guidelines on everything from accurate interpretation and presentation to data privacy. 
  4. Handle data with care 
    When handled with care, accurate data is a powerful magic dust that can reinforce messages and give brands clout and credibility. Organizations can make the most of their in-house data by working with marketing professionals trained in bringing data to life through content from research reports to digital animations, infographics and sales decks. The missing link between good data and great marketing content is the human gift of considered creativity that combines insight with real understanding of the client and their audience.

To find out how we work with data and AI to create trustworthy content that makes clients thought leaders in their market, get in touch.


About the author

Paul, based in our London office, is Content & Insight Director in our Energy & Industrials practice. He helps clients boost their profiles and reputations by using data to create trustworthy, compelling media content. His writing has won coveted industry awards and featured in leading publications from Reuters and the WEF to Forbes and The Times, and he has written industry reports for events such as the UN Climate Change Conference. He harnesses intelligent insight to delve into data and create powerful campaign strategies, brand messaging, thought leadership, award entries, infographics, white papers, research reports and more.

Key takeaway

Is AI capable of creating convincing media content?

AI can generate convincing media content at speed and scale, but it can also reproduce the errors and biases in its training data. Expert human oversight is therefore essential to quality-assure AI-assisted content before it reaches the media.

How can companies safeguard brand reputations in the AI era?

Companies should work with data-literate communications experts that are experienced in working with AI, accurately interpreting complex data such as survey results, and creating quality-assured media content.

How can we protect the integrity of the news media from AI?

Keep an expert human in the loop, implement strict AI codes of practice and enforce data ethics. Bringing data to life for the media requires data-literate marketing experts with the human gift of insight combined with real understanding of the client and their audience. 
