Three consumer protection priorities in the time of AI

17 January 2024

How do you know if this article was written by artificial intelligence (AI)? 

It wasn't, but should you have the right to know?

The case for transparency and consumer information in the age of generative AI grows stronger by the day. The market is increasingly concentrated, with a handful of companies controlling much of the data and many of the models. Looking ahead, we expect concerns about competition to intensify, and we are watching closely from a consumer rights point of view as regulators in major markets sharpen their focus on antitrust.

But this much is clear: generative AI will reshape much of our lives - extending to our laws, norms and values - making transparency essential. That means traditional consumer protection needs to be rethought to keep pace with these developments.

The United Nations Guidelines for Consumer Protection emphasise transparency for two reasons: it gives people the information they need to make informed choices, and it enables authorities to establish and enforce rules.

Work has already started to improve transparency in AI. To ensure people are the true beneficiaries of this new technology, effective consumer protection needs to be built in at three key stages:

1. Construction 

Consumers have real worries about the way AI is being built and how data is being incorporated, according to a review from the US Federal Trade Commission. Many generative AI models need large data sets for training. We need to question how AI models are built and maintained, and whether this has been done in a way that is fair to consumers from the start.

For example, is the data used to train an AI model collected lawfully and with people's consent? Is the human labour that labels and categorises that data treated ethically? And are the environmental resources involved managed responsibly? Developers should be transparent about what it takes to create the tools consumers use, in the same way that product labelling helps people understand what goes into their food, textiles or medicine.

2. Distribution

Once an AI model has been built, it must be deployed in a consumer-first way.

Open- versus closed-source development has emerged as a key debate. With open models, the source code is available to the public for anyone to use and build on, while closed models are kept private and proprietary.

There are arguments in favour of either approach, and it's exciting to see new tools become available to society. But to properly protect consumers, we need to know what impact an AI model has on society once it is unleashed.

Have the developers and deployers of these products considered or disclosed the risks they might present? Do they allow external parties – such as researchers or enforcement agencies – to independently verify those claims? And in the case of open models, are there rules around who can build on that code and what they are allowed to do with it?

We know, for example, that open generative AI models have already been used to create non-consensual sexual imagery. The Norwegian Consumer Council has detailed the extensive harm the technology can cause. In particular, it has the potential to unleash a new era of mis- and disinformation and to supercharge bad actors with scams. It could also make cyber deception harder to spot: research shows people can identify AI-written content only about half the time.

At our Global Congress 2023, Consumers International issued a Global Statement to Stop Online Scams, calling on governments to ensure adequate protection against such activities on technology platforms. What's needed is effective action and regulation to prevent, detect, disrupt and respond to these threats.

Those developing AI systems must acknowledge and report what they know about the potential for harm.

3. Responsibility

We also need to interrogate whether there are robust procedures for solving issues that arise, and whether the right levels of accountability and recourse are in place across industry, government and civil society. This includes the right to redress for consumers, the disclosure of government access requests, and accountability for intellectual property infringement.

Put another way, if an AI system creates a problem for a human, who is to blame – and who should fix it? Clear lines of accountability need to be drawn.

Much has been written about the potential for AI and other technologies to unfairly discriminate or perpetuate biases, but less about who should be held accountable for this, or whether there should be any recourse for those affected. There needs to be a strong debate about ways to appeal or contest decisions made by AI algorithms, for example, in credit lending, healthcare, insurance or hiring.

AI becoming ubiquitous

We're all aware of the power AI has to reshape our lives in useful and effective ways. But the pace of change and the lack of regulation warrant proactive policymaking around consumer protection.

With 2024 in full swing, we need to scrutinise what's already in place and think about how best to harness upcoming opportunities to shape the debate and enact change: World Consumer Rights Day, for which we have set the theme of Fair and Responsible AI for Consumers; the World Economic Forum's Annual Meeting in Davos; and the 2024 G20 summit in Rio de Janeiro.

Significant legislative efforts, like the European Union’s AI Act, were set in train last year, demonstrating the high-level awareness, support and momentum behind the idea of protecting consumers.

We all want to harness the power of technology and, if we do it responsibly, generative AI could have broad benefits with minimal downsides. Without discussion and mitigation of these risks, the outcome may be very different.

The time to put consumers first is now.