Credit Reporting, Data, and AI: Three Key Issues
Credit reporting has benefits for both consumers and providers, with the potential to promote more responsible lending and reduce the risk of defaults. However, developments in the use of data and AI could be further complicating an already controversial area.
A recent consultation with Consumers International’s members revealed a complex international picture. There are significant differences between countries regarding who is responsible for credit reporting, including public bureaus, large private bureaus, and an increasing number of fintechs conducting credit checks either for their own lending or for third parties. Our member BEUC highlighted that even within Europe there is no common system.
There are also significant differences in the data that is used. While some assessments use limited data about previous credit defaults, there is growing use of more financial data, such as the payment of utility bills, as well as non-financial data. This may include social media, browsing history, and even psychometric testing.
Differences between countries are not necessarily a problem, but some practices are raising concerns. Three key issues emerge for consumers and credit reporting around the world.
Lack of understanding
Consumers International’s members suggest there may be widespread confusion about credit reporting. Research by CHOICE revealed that 1 in 3 Australian consumers said they didn’t understand how credit reporting works. Similarly, Which? in the UK found that only 5% of participants could correctly identify what actions impacted their credit scores from a list of 20 options.
This confusion also extends to a lack of understanding about the control consumers have over their information. In South Korea, the UK and Australia, consumers have the right to access a limited number of free credit reports, but they are often unaware of this right or believe that applying for a credit report may damage their credit score.
This is a particular concern given studies showing that assessments can contain myriad mistakes. Consumer Reports noted that the Federal Trade Commission found that roughly one in five consumers, about 40 million people, had an error in at least one of their credit reports.
Lack of transparency and control
Consumers are often unaware that their data is being used in credit reporting. They may have given consent by simply ticking a box at the end of long and complex Terms and Conditions, or it may not even have been required in the first place. In Brazil, there is concern that recent legislation allows some data about consumers to be sent to bureaus without their prior consent.
This lack of transparency could worsen with the advent of ‘super-apps’ which offer consumer credit alongside services such as social media, e-commerce, and payments. Understanding how a consumer’s data is collected and used in such a situation could be particularly difficult. Social media data is already being used by some fintechs in concerning ways – for example, Lenddo reduces a consumer’s credit score if they are Facebook friends with someone who has failed to repay a Lenddo loan on time.
Even in regions with robust data protection laws, such as the EU, there are still risks. There is a tension in the EU’s General Data Protection Regulation (GDPR) between the concepts of data minimisation and legitimate interest. Businesses may be able to exploit the latter if they can demonstrate that using certain types of data is sufficiently predictive in credit reporting.
Finally, there are also questions about the increasing use of credit reporting by providers of non-financial services. There are reports from Kenya that credit reports have been used by potential employers, while German campaigners have highlighted that your SCHUFA (credit) score can be used by landlords and network providers.
Correlation, not causation
With alternative data sources, the aim is to create a more holistic picture of the consumer’s ability to repay a loan by collating a wider range of data and rapidly analysing it with algorithms. If the use of algorithms and AI in credit reporting becomes more widespread and advanced, incorporating machine learning rather than simple analysis, consumers could face further risks.
While AI can be adept at spotting patterns, this tends to identify correlation rather than causation. One study used digital footprints to predict who would pay back a loan and found that Mac computer and Gmail account users were better credit risks than PC and Hotmail account users. This highlights the potential for algorithms to discriminate based on correlations, which becomes far more concerning when applied to social characteristics like race, gender, or income.
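To make the point concrete, here is a minimal, purely illustrative sketch in Python. The data is invented and the "model" is deliberately naive: it scores applicants by the historical repayment rate of people who used the same email provider. It faithfully learns the correlation in the data, even though an email provider has no causal bearing on whether someone repays a loan. Swap "email provider" for a social characteristic and the discrimination risk described above becomes obvious.

```python
# Illustrative toy only: hypothetical data, not any real scorer's method.
from collections import defaultdict

# Synthetic training records: (email_provider, repaid_loan)
history = [
    ("gmail", True), ("gmail", True), ("gmail", True), ("gmail", False),
    ("hotmail", True), ("hotmail", False), ("hotmail", False), ("hotmail", False),
]

# Tally repayments per provider: provider -> [repaid_count, total_count]
counts = defaultdict(lambda: [0, 0])
for provider, repaid in history:
    counts[provider][1] += 1
    if repaid:
        counts[provider][0] += 1

def score(provider):
    """Crude 'creditworthiness' score: the group's past repayment rate."""
    repaid, total = counts[provider]
    return repaid / total

print(score("gmail"))    # 0.75 -- rated a better risk
print(score("hotmail"))  # 0.25 -- penalised by a proxy, not by behaviour
```

A real scoring algorithm is far more sophisticated, but the failure mode is the same: any feature correlated with repayment in the training data will influence the score, whether or not it reflects the individual applicant's actual behaviour.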
Chi Chi Wu of the National Consumer Law Center testified before the US House of Representatives in July 2019 on the use of alternative data in credit scoring, noting that the use of algorithms could reproduce disparities and ‘bake in’ prejudice. She stressed the need to consider broader societal implications rather than just an algorithm’s predictive power.
Lastly, it is crucial to note that the results algorithms produce can only ever be as good as the information they use. Given serious concerns about the accuracy and relevance of some credit scoring data, AI should not be seen as a miracle solution to existing problems.
Despite the concerns that arise with the use of data and AI in credit reporting, it is important to remember its potential benefits. It can be a useful tool for financial inclusion, giving consumers with little formal financial history access to credit they would otherwise have been denied, and creating a more efficient process for all. Moreover, while the risks of prejudice in AI are worrying, human bias remains a problem in more traditional assessment processes as well.
Nonetheless, this new technology needs to be managed carefully to prevent over-indebtedness and potential exclusion. Key next steps for consumer advocates and regulators include ensuring that consumers have easy access to clear information about credit reporting, as well as transparency and control over how their credit reporting data is collected and used. At the same time, businesses that are using AI need to build accountability into their designs.
Given the concerns and difficulties that consumers already face with credit reporting, these steps are crucial for harnessing the opportunities that new technology presents while minimising the risks. Without them, we may find ourselves in a situation where the costs of credit are more than just financial.
This blog summarises a presentation we gave at FinCoNet’s annual meeting in November 2019. To learn more about our work on financial services, read our Banking on the Future report and get in contact with our Director of Advocacy, Justin Macmullan at email@example.com.