The Individual and Societal Effects of AI-driven Personalisation in the Media

Never has technology permeated society so deeply. Nonetheless, our digital ecosystem seems to struggle to reconcile the politics of information with human rights. The consequences of disinformation and manipulation online have shone a light on the cracks in the stability of democracies, as well as on the vulnerability of our information society. AI, particularly Machine Learning (ML), can create adaptive systems that improve over time by analysing and recognising patterns in datasets, without being explicitly programmed to do so. AI-driven personalisation can be defined as the customisation of content to the individual user through collected data. Large tech companies began using personalisation to manage the data they collect, and social media’s personalisation, through the active and passive collection of data, is driven by a desire to profit from targeted advertising. Personalisation is much more than a marketing strategy, however: it has become an important tool for analysing, predicting, and even redirecting individual and group opinion dynamics. A research paper released by Chatham House has looked in depth at how AI-driven personalisation has moved on from social media to the wider information domain, including the Fourth Estate (the global press and news media). This post will outline how the use of personalisation at such a large scale can have individual and societal implications that its commercial benefits might not counterbalance.


Implications for Individual Rights

AI-driven personalisation incentivises the collection and exploitation of large amounts of personal data and increases the risk of individual users being manipulated through the spread of disinformation. Recommending is a form of personalisation that filters, ranks, and prioritises content based on a user’s previous online activity and preferences. This differs from targeting, where users are served personalised content intended to have a specific impact on their behaviour. Recommending and targeting are both forms of personalisation that can have serious effects on our autonomy and agency.
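
To make the mechanics of recommending concrete, the sketch below ranks a handful of articles by how much their topics overlap with a user’s reading history. It is a deliberately simplified toy in Python: the article data, topic labels, and function names are invented for illustration, and real recommender systems combine far more signals and far more sophisticated models.

```python
# A minimal, illustrative sketch of content-based "recommending":
# articles are ranked by how closely their topics overlap with topics
# the user has engaged with before. All data, names, and functions here
# are hypothetical; real platforms use far more signals and models.
from collections import Counter


def build_profile(reading_history):
    """Count how often each topic appears in the user's past reads."""
    profile = Counter()
    for article in reading_history:
        profile.update(article["topics"])
    return profile


def rank_articles(candidates, profile, top_n=3):
    """Score each candidate by the user's affinity for its topics, then rank."""
    def score(article):
        return sum(profile.get(topic, 0) for topic in article["topics"])
    return sorted(candidates, key=score, reverse=True)[:top_n]


# Hypothetical data: what the user has read, and what could be shown next.
history = [
    {"title": "Election polling explained", "topics": ["politics", "data"]},
    {"title": "New privacy rules in the EU", "topics": ["politics", "privacy"]},
]
candidates = [
    {"title": "Party conference highlights", "topics": ["politics"]},
    {"title": "Local football results", "topics": ["sport"]},
    {"title": "How ad targeting works", "topics": ["privacy", "data"]},
]

profile = build_profile(history)
for article in rank_articles(candidates, profile):
    print(article["title"])
# Politics- and privacy-related items rise to the top, while the sport
# story is ranked last: a small-scale version of the narrowing effect
# ("filtering, ranking, and prioritising") described above.
```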


Algorithms can make it difficult to distinguish between offering, persuasion, and manipulation. UN Special Rapporteur David Kaye called for human rights audits to assess the effects of “AI-assisted curation” on citizens’ autonomy to govern themselves and make choices based on their own circumstances. Because of AI-driven personalisation, our vulnerabilities are easier to assess and exploit. The use of our data is also often not transparent, which is at odds with the Charter of Fundamental Rights of the EU, which guarantees the right to data protection.


AI-driven personalisation can also perpetuate discrimination and the exclusion of groups from information and opportunities. This can happen through the filtering out of minority voices, as well as through the use of sensitive information (e.g. ethnicity, gender, sexual orientation, religious beliefs) to target certain groups, exclude them from products, information, and services, or even offer them different prices. The use of facial recognition has also consistently come under fire from rights groups, particularly the deployment of these technologies by Facebook.


Critics have argued that the filtering of content can limit users to seeing only specific kinds of content, creating echo chambers and increasing polarisation. This has, however, been widely contested, with researchers at DeepMind stating that there are many measures one can take to counteract an echo chamber, and others claiming that echo chambers might not exist at all. It is nevertheless difficult to measure the immediate effect of AI-driven personalisation in the media on users, as its effects are more likely to accumulate over time.


Societal and Political Implications

The effects of AI-driven personalisation also extend beyond the individual to wider society and collective rights. The use of algorithms to support personalisation is extending beyond the world of tech companies and social media into the Fourth Estate. The New York Times, the Washington Post, the BBC, and the Sunday Times, just to name a few, are all heavily investing in AI and ML for personalisation, in order to show audiences what is “more interesting to them”. Even the European Commission has invested almost €4 million in personalisation. The question thus arises of whether algorithmic optimisation can actually serve the public interest. Various academics doubt that it can, arguing that the deployment of AI in news production will simply lead to more output rather than higher quality.


Unregulated personalisation can have deep political effects, threatening the stability and legitimacy of the political system. The fear of echo chambers extends to the collective, with democracies fearing a decline in deliberation and reflexivity due to a less-informed citizenry. Exposure to a wide array of sources of information and news is considered a source of greater social cohesion, leading to better understanding and the fostering of communities. AI-driven personalisation, on the other hand, risks eroding community solidarity by targeting voters with issues that only affect them, empowering their peer group while disempowering others. This applies to the Fourth Estate as much as to social media: giving too much weight to the preferences of the audience puts the autonomy of journalists at risk and reduces their ability to serve the public interest.


Russia’s use of disinformation campaigns on Facebook and Instagram to influence political debate in Western countries, particularly during the 2016 US presidential election, is an example of the abuse of such individualisation of citizens for political goals. By targeting individuals directly, Russian activity evaded detection and played on voters’ known pre-existing grievances. AI-driven targeting and personalisation have also been criticised for increasing polarisation through the use of emotional content to boost engagement.


In Summary

AI-driven personalisation in communication and in the media is worth analysing and understanding. It can transform the way in which individuals relate to society, but it can also impede democratic processes by limiting or influencing the information that citizens, particularly specific groups, are exposed to.


AI-driven personalisation, like many other technologies, affects not only digital media but also journalism in the Fourth Estate. Journalists now have metrics on what their audiences want, which affects the balance between honest journalism and the commercial goals of the organisations they work for; AI-driven technology can also undermine the anonymity of sources.


It is important to note that AI technologies do not themselves create polarisation and discrimination. They can, however, embed them more deeply into society by normalising them. Although global governance and legal frameworks for AI are welcome and necessary, AI-driven personalisation might be an issue that needs to be addressed on a case-by-case basis.


Written by Zoe Caramitsou-Tzira.
