Building consumer trust beyond the hype: Navigating ethical data practices in generative AI

Guest Blog 

Last week, Consumers International led the global campaign for World Consumer Rights Day - celebrated each year on 15 March. Our 2024 campaign called for Fair and Responsible AI for Consumers. We focussed on the cross-sector collaboration needed to ensure consumers are protected across the AI lifecycle - from development to deployment to use.

In this blog, Oarabile Mudongo from Consumers International joins Chandni Gupta and Marianne Campbell from Consumer Policy Research Centre (Australia) to discuss the challenge of upholding data ethics and transparency in an age of AI.

It's been less than two years since ChatGPT was released to the public, but generative AI has already been adopted by millions of consumers in homes, workplaces and education settings across the world. The adoption of this disruptive technology is moving fast - much faster than other major tech developments of the past decade, like the smartphone or tablet. But as its user base climbs, there is a delicate balance to be struck between innovation and responsibility. The drive to push forward in this space must be tempered by careful navigation of generative AI's ethical implications.

In particular, generative AI has raised significant concerns regarding data privacy and consumer trust. It is imperative that we explore the foundational ethical principles guiding the development and deployment of generative AI. By embedding these principles, we can navigate the complex terrain of AI ethics and propose strategies for responsible innovation. 

Transparency as a pillar of consumer confidence

Transparency is a linchpin of consumer trust - especially when it comes to new tech. Consumers are already rightfully concerned about how their data is collected, used and processed. In recent research with UNSW Sydney, the Consumer Policy Research Centre (CPRC) in Australia found in its Singled Out study that 72% of Australians believe they have little to no control over information collected by businesses with which they have no direct interaction. Addressing these concerns in the age of AI will mean building transparency into all stages of the AI lifecycle. This means clear communication about data collection practices and algorithmic decision-making - and their potential implications for consumers. CPRC spelt this out in the three-tier model of AI transparency it proposed to the Australian Government:

  1. Pre-implementation: through pre-market impact assessments and regulatory sandboxes that can identify potential harm before an AI system is released.
  2. Throughout the lifecycle: through regular system assessments and reporting, recognising that AI systems are dynamic, evolving over time.
  3. To consumers and the community: via disclosures at the point of use, so consumers are adequately aware that AI has been used to curate what they are viewing, experiencing or being offered.

This three-tiered approach aligns with the Foundation Model Transparency Index, which considers upstream (data and development process), model (inner workings and limitations), and downstream (impact on users and society) aspects of AI. 

Navigating data privacy in the age of AI 

Central to ethical AI practice is ensuring consumers can exercise control over their personal data in a meaningful way. CPRC's 2022 research into privacy found that only 7% of Australians feel companies give them real choices to protect their privacy online, and only 15% believe businesses are doing enough to protect their privacy when it comes to how their personal information is collected, shared and used. To achieve a truly equitable and transparent digital ecosystem, we need a fundamental shift in data governance. Businesses and governments must prioritise consumer-centric approaches. This means empowering individuals with control over their data and ensuring it is used ethically and responsibly. However, the onus should not be on consumers to become data security experts. By prioritising consumer trust and building robust data governance frameworks, we can foster a digital space where everyone can benefit. 

Joahna Kuiper / Better Images of AI / Little data houses (square) / CC-BY 4.0

Generative AI presents unique challenges in terms of data privacy. Large language models rely on vast amounts of user data, so implementing robust privacy-preserving techniques is essential to safeguard sensitive information. Techniques like federated learning (where devices collaborate on a model without sharing individual data), differential privacy (adding noise to protect details), and secure multiparty computation (working together on encrypted data) all enable collaborative model training while preserving the privacy of individual user data. 
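To make the differential privacy idea concrete, here is a minimal sketch of its core mechanism: adding calibrated Laplace noise to an aggregate query so no individual record can be pinned down. The function names and the opted-in-users scenario are illustrative, not drawn from any specific system discussed in this post.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (one person joining or leaving
    the dataset changes it by at most 1), so Laplace noise with
    scale 1/epsilon suffices for this single query.
    """
    return len(records) + laplace_noise(1.0 / epsilon)

# Illustrative use: report roughly how many users opted in to a
# feature without exposing the exact figure.
opted_in_users = list(range(334))  # hypothetical records
noisy_count = private_count(opted_in_users, epsilon=0.5)
```

Smaller values of epsilon add more noise and give stronger privacy; real deployments also have to budget epsilon across repeated queries, which this single-query sketch ignores.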

Giving consumers control over their data requires user-friendly interfaces and transparent data management practices. Companies should provide clear opt-in mechanisms for data sharing, with no sharing as the default. They should also offer options to choose exactly what data is shared, and make privacy policies readily accessible and written in plain language.
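One way to picture "opt-in, with no sharing by default" is as a consent record in which every category starts switched off. The sketch below is hypothetical: the class and category names are invented for illustration and do not come from any regulation or product mentioned in this post.

```python
from dataclasses import dataclass

@dataclass
class ConsentPreferences:
    # Every sharing category defaults to False: nothing is shared
    # until the consumer explicitly opts in. Category names are
    # illustrative only.
    analytics: bool = False
    personalisation: bool = False
    third_party_sharing: bool = False

    def granted(self):
        """List the categories the consumer has opted in to."""
        return [name for name, value in vars(self).items() if value]

# A new consumer starts with nothing shared...
prefs = ConsentPreferences()
assert prefs.granted() == []

# ...and opts in to exactly the categories they choose.
prefs.personalisation = True
```

Encoding the default in the data structure itself, rather than in UI copy, makes the opt-in stance auditable: any consent record created without explicit choices provably shares nothing.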

Global Collaboration, Consensus and Action

The challenge of AI ethics transcends geographical boundaries. International organisations, governments, academia, industry stakeholders, and civil society must collaborate to develop frameworks promoting ethical AI practices. Initiatives like the Partnership on AI, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, and the EU's AI Ethics Guidelines exemplify the kind of collaboration that drives ethical AI innovation. Engaging diverse perspectives and expertise is crucial to addressing AI ethics challenges comprehensively.

Moreover, global collaboration is vital for addressing AI's ethical implications beyond technical considerations. Examining socioeconomic impacts, ensuring fairness in algorithmic decision-making, and confronting broader ethical questions of autonomy, accountability and human dignity are all essential. Fostering interdisciplinary dialogue helps develop holistic approaches that uphold fundamental values across diverse contexts.

Ensuring ethical data practices in generative AI is imperative for building consumer trust and fostering responsible innovation. Prioritising transparency will help clear the path as we navigate AI's ethical complexities - and, in turn, create a more ethical, inclusive and hopeful digital future.