On February 16, 2017, the U.S. Consumer Financial Protection Bureau (the “CFPB”) issued a Request for Information (“RFI”) seeking feedback from the public on the use or potential use of alternative data and methodologies to assess consumer credit risk, with the aim of expanding access to credit for credit invisible segments of the population. Presently, most lenders make credit decisions using modeling techniques that rely on “traditional data” such as loan history, credit limits, debt repayment history, and information relating to tax liens and bankruptcy. Lenders use this information to assess the creditworthiness of consumers, that is, to estimate the likelihood that a consumer will default or become delinquent on a loan within a given period of time.
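The kind of model described above can be sketched in miniature. The following is a hypothetical illustration only: the feature names and coefficients are invented for this sketch and are not drawn from any real scorecard, but the logistic form is representative of how traditional-data scoring models map credit-file attributes to a default probability.

```python
import math

def default_probability(utilization, late_payments_24m, years_of_history):
    """Toy estimate of default probability from traditional credit-file features.

    Coefficients are invented for illustration: higher utilization and more
    late payments raise the risk estimate; a longer history lowers it.
    """
    z = -2.0 + 3.0 * utilization + 0.8 * late_payments_24m - 0.15 * years_of_history
    return 1.0 / (1.0 + math.exp(-z))  # logistic link maps the score to (0, 1)

# With otherwise identical behaviour, a thin file (short history) yields a
# less favourable estimate than a seasoned one -- the "credit invisible"
# problem in microcosm.
thin_file = default_probability(utilization=0.3, late_payments_24m=0, years_of_history=1)
seasoned = default_probability(utilization=0.3, late_payments_24m=0, years_of_history=10)
```

The point of the sketch is structural: a consumer with no file at all supplies no inputs to such a model, so the lender cannot produce a score, which is the gap alternative data is proposed to fill.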
However, approximately 45 million Americans lack access to credit because they have no credit file with any credit bureau, or because their credit files are too limited or stale to generate a reliable score (“credit invisible consumers”). The CFPB is seeking feedback on whether “alternative data”, such as payment information on phone bills, rent, insurance and utilities, as well as checking account transaction information and social media activity, could provide an innovative solution to this problem. Enhanced data points could potentially address the information asymmetry that currently prevents lenders from serving credit invisible consumers.
In Canada, privacy legislation and other laws place restrictions on the types of personal information and other information that can be used in making credit decisions; the role of consent is critical. In contrast, the U.S. lacks a comprehensive privacy regime and has not examined credit risk assessment from that perspective.
Through its RFI, the CFPB is seeking information from interested parties on the development and adoption of innovations relating to the application of alternative data, and current and potential consumer benefits and risks associated with such application. The following is a summary of some of the risks and benefits identified by the CFPB:
- Greater Access to Credit: Consumers without a traditional loan repayment history could substitute such data points with regular bill payments for cell phones, utilities or rent. This information could in some cases prove sufficient for a lender to assess the creditworthiness of consumers and perhaps deem them to be viable credit risks.
- Improved Credit Prediction: Access to more granular and nuanced information about a consumer’s financial behaviour, patterns and context could allow lenders to identify trends in the consumer’s profile. Social media job status changes could perhaps help identify those individuals with low credit scores who have surmounted previous financial obstacles, and have a much better future credit outlook than the credit score snapshot would suggest.
- Profitability, Cost Savings and Convenience: Currently, many lenders forego providing credit to consumers with poor credit scores or non-existent credit history. With better data, lenders can market their products and services to more consumers, thereby increasing revenues and profits.
- Privacy: As is the case with big data generally, the CFPB has identified privacy issues as one of the primary risks associated with the use of such alternative data. Most forms of alternative data include information that can reveal fairly intimate details about an individual, such as social media activity, behaviour and location patterns. Lender access to such data would likely need to be regulated and protected explicitly at the legislative level.
- Data Quality: Some types of alternative data may be prone to greater rates of error because the quality standards applied for their original purpose may be less rigorous than those applied to traditional data destined for the credit approval process. Incomplete, inconsistent or inaccurate data could have a detrimental impact on lenders’ ability to correctly and fairly assess a consumer’s credit viability.
- Discrimination: Greater access to information also introduces the potential for discrimination. Machine learning algorithms may predict a consumer’s likelihood of default, but could correlate such probabilities with race, sex, ethnicity, religion, national origin, marital status, age or some other basis protected by law. Using alternative data as a proxy for identification of certain sub-groups of the population could be a violation of anti-discrimination laws.
- Unintended Consequences and Control: The CFPB has expressed concern that use of alternative data could have unintended negative consequences for some consumers. For example, frequent moves and address changes by members of the military could create a false impression of overall instability. Alternative data could also include information about consumers that is beyond their control. Such data would make it difficult for consumers to improve their credit profile and thereby harden barriers to economic and social mobility.
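The proxy effect behind the discrimination risk noted above can be made concrete. The sketch below uses entirely synthetic, invented data: a facially neutral alternative-data feature (a made-up “activity score”) that happens to track membership in a protected group closely, even though group membership is never an input.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic records: protected_group flags membership (0/1); activity_score is
# a fabricated alternative-data feature. In this invented sample the feature
# is strongly (negatively) correlated with group membership.
protected_group = [0, 0, 0, 0, 1, 1, 1, 1]
activity_score = [0.9, 0.8, 0.85, 0.7, 0.3, 0.25, 0.4, 0.2]

r = pearson_r(protected_group, activity_score)
# A model trained on activity_score alone would effectively sort applicants by
# group membership -- the proxy effect the CFPB identifies as a discrimination
# risk, even when no protected attribute is ever collected.
```

This is why a model can violate anti-discrimination principles without ever seeing a protected attribute: the correlated feature carries the protected information into the decision.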
The Canadian regulatory landscape already addresses many of the risks identified in the CFPB RFI. Provincial credit reporting legislation governs the use of traditional credit data and includes a number of safeguards intended to protect consumers. For example, an entity that, for profit, furnishes credit information or personal information (or both) pertaining to a consumer to a third party for the purposes of that third party’s credit assessment must be licensed and regulated as a credit reporting agency in Ontario.
In addition, under such legislation, consumers have certain rights, including the right to be notified when a credit application they have made has been refused based on their credit score (otherwise known as the requirement to provide “adverse action letters” to credit applicants), and the ability to access, review and correct their credit report (for example if the credit report includes incorrect information as a result of identity theft or error).
However, the potential use by lenders of non-traditional credit data which consumers may not be aware of, or able to access and correct, could lead to similar data quality concerns in Canada as identified above in the U.S. It is worth noting that, to the extent the non-traditional data points are “personal information”, Canadian consumers would have a right under privacy legislation to access and/or correct any such information.
The privacy and discrimination concerns as outlined above in the U.S. have, in large part, been addressed in Canada through human rights legislation and privacy laws, although the advent of Big Data and analytics techniques (including the use of aggregate and anonymous personal information) is making once-clear regulatory boundaries significantly murkier. The Office of the Privacy Commissioner of Canada (“OPC”) recognized this in its recent Discussion Paper Exploring Potential Enhancements to Consent under the Personal Information Protection and Electronic Documents Act, where it noted the challenges of obtaining meaningful, valid consent in a Big Data world. The OPC suggested that some of the Big Data concerns could potentially be addressed by sectoral “codes of practice” (and observed that in Australia, credit reporting bureaus can develop codes of practice which are then registered by the commissioner there).
The OPC has also explored the idea of legislating “no-go zones” of data – in short, prohibiting the collection, use or disclosure of personal information in certain circumstances. These zones could be based on a variety of criteria, such as the sensitivity of the data, the nature of the proposed use or disclosure, or vulnerabilities associated with the group whose data is being processed. Alternative means of assessing credit risk, and the attendant concerns about the sensitivity of this information and the potential for discriminatory impacts, suggest that such uses of this information may attract future regulatory scrutiny.