Scoring and Modeling: Underwriting and Loan Approval Process
https://www.fdic.gov/regulations/examinations/credit_card/ch8.html
Types of Scoring
FICO Scores
VantageScore
Other Scores
Application Scoring
Attrition Scoring
Bankruptcy Scoring
Behavior Scoring
Collection Scoring
Fraud Detection Scoring
Payment Projection Scoring
Recovery Scoring
Response Scoring
Revenue Scoring
Dual-Scoring Matrix
Credit Scoring Model Development
Basel Considerations Regarding Credit Scoring
Validation
Cut-off Score
Validation Charts and Calibration
Overrides
Credit Scoring Model Limitations
Automated Valuation Models
Summary of Examination Goals – Scoring and Modeling
VIII. Scoring and Modeling
Scoring and modeling, whether internally or externally developed, are used extensively in credit card lending. Scoring models summarize available, relevant information about consumers and reduce the information into a set of ordered categories (scores) that foretell an outcome. A consumer‘s score is a numerical snapshot of his or her estimated risk profile at that point in time. Scoring models can offer a fast, cost-efficient, and objective way to make sound lending decisions based on bank and/or industry experience. But, as with any modeling approach, scores are simplifications of complex real-world phenomena and, at best, only approximate risk.
Scoring models are used for many purposes, including, but not limited to:
- Controlling risk selection.
- Translating the risk of default into appropriate pricing.
- Managing credit losses.
- Evaluating new loan programs.
- Reducing loan approval processing time.
- Ensuring that existing credit criteria are sound and consistently applied.
- Increasing profitability.
- Improving targeting for treatments, such as account management treatments.
- Assessing the underlying risk of loans, which may encourage the credit card backed securities market by equipping investors with objective measurements for analyzing credit card loan pools.
- Refining solicitation targeting to minimize acquisition costs.
Credit scoring models (also termed scorecards in the industry) are primarily used to inform management for decision making and to provide predictive information on the potential for delinquency or default that may be used in the loan approval process and risk pricing. Further, credit risk models often use segment definitions created around credit scores because scores provide information that can be vital in deploying the most effective risk management strategies and in determining credit card loss allowances. Erroneous, misused, misunderstood, or poorly developed and managed scoring models may lead to lost revenues through poor customer selection (credit risk) or collections management. Therefore, an examiner‘s assessment of credit risk and credit risk management usually requires a thorough evaluation of the use and reliability of the models. The management component rating may also be influenced if governance procedures, especially over critical models, are weak. Regulatory reviews usually focus on the core components of the bank‘s governance practices by evaluating model oversight, examining model controls, and reviewing model validation. They also consider findings of the bank‘s audit program relative to these areas. For purposes of this chapter, the main focus will be scoring and scoring models. A brief discussion on validating automated valuation models (AVM) is included in the Validation section of this chapter, and loss models are discussed in the Allowances for Loan Losses chapter. Valuation modeling for residual interests is addressed in the Risk Management Credit Card Securitization Manual.
Scoring models are developed by analyzing statistics and picking out cardholders' characteristics thought to be associated with creditworthiness. There are many different ways to compress the data into scores, and there are several different outcomes that can be modeled. As such, scoring models have a wide range of sophistication, from very simple models with only a few data inputs that predict a single outcome to very complex models that have several data inputs and that predict several outcomes. Each bank may use one or more generic, semi-custom, or custom models, any of which may be developed by a scoring company or by internal staff. They may also use different scoring models for different types of credit. Each bank weighs scores differently in lending processes, selects when and where to inject the scores into the processes, and sets cut-off scores consistent with the bank's risk appetite. Use of scoring models provides for streamlining but does not permit banks to improperly reduce documentation required for loans or to skip basic lending tenets such as collateral appraisals or valuations.
Practices regarding scoring and modeling not only pose consumer lending compliance risks but also pose safety and soundness risks. A prominent risk is the potential for model output (in this case scores) to incorrectly inform management in the decision-making process. If problematic scoring or score modeling causes management to make inappropriate lending decisions, the bank could fall prey to increased credit risk, weakened profitability, liquidity strains, and so forth. For example, a model could wrongly suggest that applicants with a score of XYZ meet the bank's risk criteria and the bank would then make loans to such applicants. If the model is wrong and scores of XYZ are of much higher risk than estimated, the bank could be left holding a sizable portfolio of accounts that carry much higher credit risk than anticipated. If delinquencies and losses are higher than modeling suggests, the bank's earnings, liquidity, and capital protection could be adversely impacted. Or, if such accounts are part of a securitization, performance of the securitization could be at risk and could put the bank's liquidity position at risk, for instance, if cash must be trapped or if the securitization goes into early amortization. A poorly performing securitization would also impact the fair value of the residual interests retained.
Well-run operations that use scoring models have clearly defined strategies for use of the models. Since scoring models can have significant impacts at all stages of a credit card account's life, from marketing to closure, charge-off, and recovery, scoring models are to be developed, implemented, tested, and maintained with extreme care. Examiners should expect management to carefully evaluate new models internally developed as well as models newly purchased from vendors. They should also determine whether management validates models periodically, including comparing actual performance to expected performance. Examiners should expect management to:
- Understand the credit scoring models thoroughly.
- Ensure each model is only used for its intended purpose, or if adapted to other purposes, appropriately test and validate it for those purposes.
- Validate each model‘s performance regularly.
- Review tracking reports, including the performance of overrides.
- Take appropriate action when a model‘s performance deteriorates.
- Ensure each model‘s compliance with consumer lending laws as well as other regulations and guidance.
Most likely, scoring and modeling will increasingly guide risk management, capital allocation, credit risk analysis, and profitability analysis. The increasing reliance on scoring and modeling in management's lending decisions and risk management processes accentuates the importance of understanding scoring model concepts and underlying risks.
Types of Scoring
Some banks use more than one type of score. This section explores scores commonly used. While most scores and models are generally established as distinct devices, a movement to integrate models and scores across an account's life cycle has become evident.
FICO Scores
Credit bureaus offer several different types of scores. Credit bureau scores are typically used for purposes which include:
- Screening pre-approved solicitations.
- Determining whether to acquire entire portfolios or segments thereof.
- Establishing cross-sales of other products.
- Making credit approval decisions.
- Assigning credit limits and risk-based pricing.
- Guiding account management functions such as line increases, authorizations, renewals, and collections.
The most commonly known and used credit bureau scores are called FICO scores. FICO scores stem from modeling pioneered by Fair, Isaac and Company (now known as Fair Isaac Corporation) (Fair Isaac), hence the label "FICO" score. Fair Isaac devised mathematical modeling to predict the credit risk of consumers based on information in the consumer's credit report. There are three main credit bureaus in the United States that house consumers' credit data: Equifax, TransUnion, and Experian. The credit-reporting system is voluntary, and lenders usually update consumers' credit reports monthly with data such as, but not limited to, types of credit used, outstanding balances, and payment histories. A consumer's bureau score can be significantly impacted by a bank's reporting practices. For instance, some banks have not reported certain information to the bureaus. If credit limits are not reported, the score model might use the high balance (the reported highest balance ever drawn on the account) in place of the absent credit limit, potentially inflating the utilization ratio and lowering the credit score. Errors in, or incompleteness of, consumer-provided or public record information in credit reports can also impact scoring. Consumer-supplied information comes mainly from credit applications, and items of public record include items such as bankruptcies, court judgments, and liens.
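The high-balance substitution described above can be sketched in a few lines. This is a hypothetical illustration of the mechanics, not the bureaus' actual formula; the function name and fallback logic are assumptions.

```python
def utilization(balance, credit_limit=None, high_balance=None):
    """Revolving utilization with a high-balance fallback.

    Hypothetical sketch: when the lender does not report a credit
    limit, a score model may substitute the highest balance ever
    drawn on the account, which can inflate the ratio.
    """
    denom = credit_limit if credit_limit is not None else high_balance
    if not denom:
        return None  # no usable denominator; characteristic cannot be scored
    return balance / denom

# With a reported $10,000 limit, a $2,500 balance is 25% utilized.
# If the limit is absent and the high balance is $3,000, the same
# balance appears roughly 83% utilized, a much riskier-looking profile.
```

The example shows why a bank's decision not to report credit limits can depress its own customers' bureau scores.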
Each bureau generates its own scores by running the consumer's file through the modeling process. Although banks might not use all three bureaus equally, the scoring models are designed to be consistent across the bureaus (even though developed separately). Thus, an applicant should receive the same or a similar score from each bureau. In reality, variations (usually minor) arise due to differences in the way the bureaus collect credit information (for example, differences in the date of data collection) or due to discrepancies among the information the bureaus have on file, which could include inaccurate information. FICO scores rank-order consumers by the likelihood that they will become seriously delinquent in the 24 months following scoring. FICO scores of 660 or below may be considered illustrative of subprime lending (as set forth in the January 2001 Expanded Guidance for Subprime Lending), although other characteristics are normally considered in subprime lending determinations as well.
Benefits of credit bureau scoring include that it is readily available, is relatively easy to implement, can be less expensive compared to internal models, and is usually accompanied by various bureau-provided resources. Disadvantages include that scoring details are, for the most part, confidential and that it is available to every lender (no competitive differentiation).
As is the case for any type of scores generated by models, FICO scores are inherently imperfect. Nevertheless, they usually maintain effective rank ordering and can be useful tools, particularly when resource or volume limitations preclude the development of a custom score. Several types of FICO scores are in use including Classic FICO, NextGen FICO Risk, FICO Expansion, and FICO Industry Options. Collectively, the scores are called FICO scores in this manual.
There are three different Classic FICO scores, one at each of the bureaus. According to www.fairisaac.com, they are branded as Beacon scores at Equifax; FICO Risk or Classic (formerly known as EMPIRICA) scores at TransUnion; and Experian/Fair Isaac Risk Model scores at Experian. Scores range from 300 to 850, with higher scores reflecting lower credit risk.
NextGen FICO Risk scores draw their name from being touted as the "next generation" of credit bureau scores. They are branded as Pinnacle at Equifax; FICO Risk Score, NextGen (formerly PRECISION) at TransUnion; and Experian/Fair Isaac Advanced Risk Score at Experian. Compared to Classic scores, NextGen scores are reported to use more complex predictive variables, an expanded segmentation scheme, and a better differentiation between degrees of future payment performance. According to www.fairisaac.com, the score range, 150 to 950, is widened, although odds-to-score ratios at interval score ranges remain the same. Cumulative odds may vary.
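The notion of holding odds-to-score ratios constant across a widened range can be made concrete with the common points-to-double-odds (PDO) scaling convention used in scorecard development. The anchor values below (660 maps to 15:1 good:bad odds, 20 points doubles the odds) are illustrative assumptions, not FICO's proprietary parameters.

```python
import math

def scaled_score(odds_good_to_bad, base_score=660, base_odds=15, pdo=20):
    """Map good:bad odds to a score under points-to-double-odds (PDO)
    scaling: every `pdo` points, the odds of good performance double.

    Anchor values are assumptions for illustration only.
    """
    factor = pdo / math.log(2)                      # points per unit of ln(odds)
    offset = base_score - factor * math.log(base_odds)
    return offset + factor * math.log(odds_good_to_bad)

# Doubling the odds from 15:1 to 30:1 raises the score by exactly 20
# points; rescaling base_score and pdo changes the range without
# changing the odds relationships between score intervals.
```

Widening a score range, as NextGen did, amounts to choosing different anchor constants while preserving the interval odds structure.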
For accounts lacking sufficient credit file information to generate a Classic or NextGen FICO score, some lenders use the FICO Expansion score. The FICO Expansion score, introduced in 2004, likely draws its name from "expanding" the credit information considered in the score to beyond that collected in a standard credit report. The expanded information includes items such as payday loans, checking account usage, and utility and rental payments. The FICO Expansion score has the same range and scaling as the Classic scores.
FICO Industry Options scores draw their name from being specific to several options of industries, such as bankcard.
VantageScore
The bureaus historically used their own proprietary models (based on Fair Isaac modeling) to develop FICO scores. However, in 2006, the bureaus introduced a new scoring system under which a single methodology is used to create scores at all three bureaus. The new system is called VantageScore. Because a single methodology is used, the score for each consumer should be virtually the same across all three bureaus. Any differences are attributed to differences in data in the consumer's files. The score will continue to incorporate typical consumer report file content but will range from 501 to 990. The scores are scaled similarly to the letter grades of an academic scale (A, B, C, D, and F). Again, the higher the score, the lower the credit risk. Consumers may well have VantageScores that are higher than their FICO scores. This is due to scaling, and that phenomenon alone does not indicate that a consumer is a better credit risk than he or she was under the traditional FICO score system. Further, when determining whether subprime lending exists, the new scale will need to be considered (in other words, 660 may not be a benchmark when looking at VantageScores). The industry's rate of replacement of custom and generic scores with VantageScore remains to be seen as of the writing of this manual.
Other Scores
In addition to or instead of generic credit bureau scores, many banks use other types of scores. Brief discussions on a variety of these scores follow, in alphabetical order. The bureaus and other vendors offer models for many of these types of scoring.
Application Scoring:
Application scoring involves assigning point values to predictive variables on an application before making credit approval decisions. Typical application data include items like length of employment, length of time at current residence, rent or own residence, and income level. Points for the variables are summed to arrive at an application score. Application scores can help determine the credit‘s terms and conditions.
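The point-summation mechanics described above can be sketched as follows. The characteristics, attributes, and point values are invented for illustration; a real scorecard is derived statistically, as discussed later in this chapter.

```python
# Hypothetical application scorecard: each characteristic maps its
# attributes to point values.  All names and points are invented.
SCORECARD = {
    "residence": {"own": 40, "rent": 25, "other": 10},
    "years_employed": [(0, 5), (2, 15), (5, 30)],   # (min years, points)
    "income_band": {"low": 10, "mid": 25, "high": 40},
}

def application_score(app):
    """Sum the points earned by an applicant's attributes."""
    score = SCORECARD["residence"].get(app["residence"], 0)
    band_pts = 0
    for min_years, pts in SCORECARD["years_employed"]:
        if app["years_employed"] >= min_years:
            band_pts = pts          # keep the highest band reached
    score += band_pts
    score += SCORECARD["income_band"].get(app["income_band"], 0)
    return score

# A homeowner with 3 years on the job and mid-band income scores
# 40 + 15 + 25 = 80 under this illustrative table.
```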
Attrition Scoring:
Attrition scores attempt to identify consumers that are most likely to close their accounts, allow their accounts to go dormant, or sharply reduce their outstanding balance. Identification of such accounts may allow management to take proactive measures to cost-effectively retain the accounts and build balances on the accounts.
Bankruptcy Scoring:
Bankruptcy scores attempt to identify borrowers most likely to declare bankruptcy. HORIZON (by Fair Isaac) is a common credit bureau bankruptcy score.
Behavior Scoring:
Behavior scoring involves assigning point values to internally-derived information such as payment behavior, usage pattern, and delinquency history. Behavior scores are intended to embody the cardholder‘s history with the bank. Their use assists management with evaluating credit risk and correspondingly making account management decisions for the existing accounts. As with credit bureau scores, there are a number of scorecards from which behavior scores are calculated. These scorecards are designed to capture unique characteristics of products such as private label, affinity, and co-branded cards.
Behavior scoring systems are often periodically supplemented with credit bureau scores to predict which accounts will become delinquent. Using a combination allows management to evaluate the composite level of risk and thus vary account management strategies accordingly.
Adaptive control systems (ACS) commonly use behavior scoring. ACS bring consumer behavior and other attributes into play for decisions in key management disciplines (for instance, line management, collections, and authorizations) so as to reduce credit losses and increase promotional opportunities. ACS include software packages that assist management in developing and analyzing various strategies taking into account the population and economic environment. They are a combination of software actionable analytics and optimization techniques and use risk/reward logic. ACS recognize that accounts can go in several directions. They consider the possible outcomes of the options and determine the "best" move to make. With ACS, challenger strategies can be tested on a portion of the accounts while retaining the existing strategy (champion strategy) on the remainder. Continual testing of alternative strategies can help the bank achieve better profits and control losses. Many large banks use TRIAD (developed by Fair Isaac) or a similar ACS, but smaller banks may lack the capital or the infrastructure to implement such a process.
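The champion/challenger testing described above depends on randomly routing a small, stable share of accounts to the alternative strategy. A minimal sketch, with an assumed 10% challenger share and invented parameter names:

```python
import random

def assign_strategy(account_id, challenger_share=0.10, seed=42):
    """Route a share of accounts to the challenger strategy while the
    champion keeps the rest.

    Seeding a generator per account keeps the assignment stable from
    cycle to cycle, so an account is not whipsawed between strategies.
    The share and seed are illustrative assumptions.
    """
    rng = random.Random(hash((account_id, seed)))
    return "challenger" if rng.random() < challenger_share else "champion"
```

Because assignment is deterministic per account, performance of the two groups can be tracked over time and the challenger promoted to champion if it proves superior.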
Collection Scoring:
Collection scoring systems rank accounts by the likelihood that payments due will be collected. They are used to determine collection strategies, collection queue assignments, dialer queue assignments, collection agency placement, and so forth. Collection scores are normally used in the middle to late stages of delinquency.
Fraud Detection Scoring:
Fraud detection scores attempt to identify accounts with potential fraudulent activity. Fraud continues to be pervasive in the credit card lending industry and detection of potential fraudulent activity can help identify and control losses as well as assist management in developing fraud prevention controls.
Payment Projection Scoring:
Payment projection scoring models use internal data to rank accounts, normally by the relative percentage of the balance that is likely to be repaid. Some models only forecast the relative percentage, while others rank the likelihood a cardholder will pay a moderate to high level of the account balance. The scores are normally used in the early to middle stages of delinquency.
Recovery Scoring:
Recovery scoring models rank order the amount of recovery that is expected after charge-off. They aid management in deploying the necessary resources where collection is most likely and help with agency placement and sale decisions.
Response Scoring:
Response scoring models are used to manage acquisition costs. By identifying the consumers most likely to respond, a bank can tailor its marketing campaigns to target those consumers and steer away from spending marketing dollars on consumers least likely to respond.
Revenue Scoring:
Revenue scoring models rank order the potential revenue expected to be generated on new accounts during the first 12-month period. The models use predictive indicators such as usage ratios, the level of revolving balances, and other card-usage patterns. Revenue scoring allows management to focus marketing initiatives on what are expected to be the most profitable accounts. Used in conjunction with credit bureau scores in screening applicants, they allow management to evaluate the revenue potential as well as the risk ranking of prospects. Consequently, management is better able to identify its target market and tailor its solicitations to that market.
Revenue scoring is also used to manage existing accounts according to revenue potential. Strategies can be formulated recognizing the risk, revenue, and frequency of cardholder use. From this information, management is better able to reward low-risk, product-loyal consumers by reducing APRs or waiving fees. Conversely, management is apt to raise APRs and fees for consumers who exhibit higher risk or that evidence little product loyalty.
Dual-Scoring Matrix
A dual-scoring matrix is a system which uses one score on one axis and another score on the other axis. Examiners should normally expect to see dual scoring in more complex credit card operations. Any scoring system may interface with another, but a commonly employed dual-scoring matrix uses application and credit bureau scores. The use of two scores allows management to segment applicants more effectively. Each score has a cut-off level (as discussed later in this chapter). Applicants that pass or fail both cut-off scores are accepted or rejected, respectively. A gray area arises when an applicant passes one cut-off but fails the other. These situations afford management a greater opportunity to maximize approvals or minimize losses by including potentially good credit risks or by excluding potentially bad credit risks that may have gone undetected in a single-scoring system. Taking advantage of this opportunity requires a thorough tracking system so that management can determine the historical loss rates for the score combinations in the gray area. Cut-off scores can then be adjusted so that the best-scoring combinations are approved and so that applicants who would be approved under a single-score system, yet still pose unacceptable risks, can be identified and excluded.
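The pass/fail/gray-area logic of a dual-scoring matrix reduces to a small decision function. The cut-off values below are illustrative assumptions, not regulatory benchmarks:

```python
def dual_score_decision(app_score, bureau_score,
                        app_cutoff=200, bureau_cutoff=660):
    """Dual-scoring matrix decision (cut-offs are illustrative).

    Pass both cut-offs: approve.  Fail both: decline.  A split result
    lands in the gray area, where management reviews historical loss
    rates for that score combination before deciding.
    """
    passes = (app_score >= app_cutoff, bureau_score >= bureau_cutoff)
    if all(passes):
        return "approve"
    if not any(passes):
        return "decline"
    return "gray area - review"
```

Tracking outcomes by score combination lets management tune which gray-area cells are approved over time.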
Credit Scoring Model Development
Scoring can be done with generic models, semi-custom models, or custom models. When properly designed, models are usually more reliable than subjective or judgmental methods. However, development and implementation of scoring models and review of these models present inherent challenges. These models will never be perfectly right and are only useful if users understand them completely. Further, errors in model construction can lead to inaccurate scoring and consequently to booking riskier accounts than intended and/or to a failure to properly identify and address heightened credit risk within the loan portfolio. Errors in construction can range from basic formula errors to sample bias to use of inappropriate predictive variables.
A scoring model evaluates an applicant‘s creditworthiness by bundling key attributes of the applicant and aspects of the transaction into a score and determines, alone or in conjunction with an evaluation of additional information, whether an applicant is deemed creditworthy. In brief, to develop a model, the modeler selects a sample of consumer accounts (either internally or externally) and analyzes it statistically to identify predictive variables (independent variables) that relate to creditworthiness. The model outcome (dependent variable) is the presumed effect of, or response to, a change in the independent variables.
The sample selected to build the model is one of the most important aspects of the developmental effort. A large enough sample is needed to make the model statistically valid. The sample must also be characteristic of the population to which the scorecard will be applied. For example, as stated in the March 1, 1999 Interagency Guidance on Subprime Lending (Subprime Lending Guidance), if the bank elects to use credit scoring (including application scoring) for approvals or pricing in a subprime lending program, the scoring model should be based on a development population that captures the behavioral and other characteristics of the subprime population targeted. Because of the significant variance in characteristics between subprime and prime populations, banks offering subprime products should not rely on models developed solely for products offered to prime borrowers.
Large numbers of both good and bad accounts are necessary to maximize the model's effectiveness. There are no hard and fast rules, but the sample selected normally includes at least 1,000 good accounts, 1,000 bad accounts, and about 750 rejected applicants. Often, the sample contains a much higher volume of accounts. The definition of good and bad accounts (the dependent variable) differs among banks, especially between prime and subprime issuers. Furthermore, definitions of bad for scoring purposes are not necessarily the same as definitions of bad used by banks for charge-off or nonaccrual consideration. For prime portfolios, good accounts tend to be defined as accounts with sufficient credit history and little or no delinquency. Bad accounts for prime portfolios are normally distinguished by adverse public records, delinquency of 90 days or more, a history of delinquency, or charge-off. Rejected applicants are applicants that management refused to accept because of their risk parameters. Certain inferences are made to break down the rejected applicants into good and bad accounts. This procedure, known as reject inferencing, makes certain assumptions about how rejected applicants would have performed had they been accepted and attempts to mitigate any accept-only bias in the sample. The process is used because it would be cost-prohibitive and potentially detrimental to make loans to consumers who would otherwise be rejected just for the sake of improving models.
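One simple reject-inferencing approach, sometimes called parceling, can be sketched as follows. The multiplier, band labels, and counts are illustrative assumptions; real reject inference is considerably more involved and this is only one of several techniques in use.

```python
def parcel_rejects(n_rejects_by_band, accept_bad_rate_by_band, multiplier=1.5):
    """Split rejected applicants in each score band into inferred
    good/bad counts ('parceling', sketched under assumptions).

    The bad rate observed on accepted accounts in the same band is
    inflated by an assumed multiplier, on the theory that rejects
    would have performed worse than accepts at the same score.
    """
    inferred = {}
    for band, n in n_rejects_by_band.items():
        bad_rate = min(1.0, accept_bad_rate_by_band[band] * multiplier)
        n_bad = round(n * bad_rate)
        inferred[band] = {"bad": n_bad, "good": n - n_bad}
    return inferred

# 100 rejects in a band where accepts ran a 20% bad rate are parceled
# as 30 inferred bads and 70 inferred goods under a 1.5x multiplier.
```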
After a representative sample has been assembled, the accounts are analyzed to determine the characteristics and attributes common to each group. The characteristics may be based on data sources such as the consumer‘s credit report, the consumer‘s application, and the bank‘s records. Characteristics are the questions asked on the application or performance categories of the credit bureau report. Attributes are the answers given to questions on the application or entries on the credit bureau report. For example, if education is a characteristic, college degree or high school diploma illustrate possible attributes.
The characteristics, which may number in the hundreds, are refined into a much smaller group of predictive variables, which are those items thought to best indicate whether a new applicant will eventually fall into the good or bad performance category. Ideally, the predictive variables also maintain a stable relationship with the performance measurement over time. Commonly used predictive variables include, but are not limited to, prior credit performance, current level of indebtedness, amount of time credit has been in use, pursuit of new credit, time at present address, time with current employer, type of residence, and occupation. Examiners should expect that management has excluded factors lacking predictive value or that by law cannot be used in the credit decision-making process (such as race).
Once the predictive variables have been selected, points are assigned to the attributes of those variables; determining how many points to award each attribute may be the most difficult element of the process. There are several methods for calculating and assigning points, all using a form of multivariate statistics. A scoring table is constructed, with characteristics on one axis and attributes on the other, and points are awarded to each cell of the matrix. The consumer's characteristics and attributes are compared with the scoring table, or scorecard, and are awarded points according to where they fall within the table. The points are tallied to arrive at the overall score. Whether a high score means low or high risk depends on the model's construction.
Once designed and prior to implementation, the model is evaluated for integrity, reliability, and accuracy by a party independent of its design. This process is referred to as validation. A portion of the development sample may be held out and scored with the new model. Performance is then monitored, and a model that demonstrates separation and rank ordering on the hold-out sample is considered valid. Validations on independent samples are also usually conducted prior to release of the model and post-implementation.
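Separation on a hold-out sample is often measured with the Kolmogorov-Smirnov (KS) statistic, one common check among several; the brute-force implementation below is a sketch for small samples, not a production routine.

```python
def ks_statistic(scores_good, scores_bad):
    """Kolmogorov-Smirnov separation between the score distributions
    of good and bad accounts.

    Walks every observed score threshold and takes the maximum gap
    between the two cumulative distributions; higher KS means better
    separation (0 = none, 1 = perfect).
    """
    thresholds = sorted(set(scores_good) | set(scores_bad))
    ks = 0.0
    for t in thresholds:
        cdf_good = sum(s <= t for s in scores_good) / len(scores_good)
        cdf_bad = sum(s <= t for s in scores_bad) / len(scores_bad)
        ks = max(ks, abs(cdf_bad - cdf_good))
    return ks

# Fully separated goods and bads yield KS = 1.0; identical
# distributions yield KS = 0.0.
```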
Validation has long been fundamental to a successful score modeling process, and evaluating a bank‘s model validation process has long been a central component of the examination. The Subprime Lending Guidance requires management to review and update models for subprime lending to ensure that assumptions remain valid. Validation is also an integral part of the proposed rulemaking for the revised Basel capital accord.
Basel Considerations Regarding Credit Scoring
A brief discussion on the new Basel capital accord is housed in the Capital chapter. Under the proposed rulemaking, banks that use an Internal Ratings Based (IRB) approach would use internal estimates of certain risk parameters as key inputs when determining their capital requirements. The IRB approach requires banks to assign each retail exposure to a segment or pool with homogeneous risk characteristics. These characteristics are often referred to as primary risk drivers and may include credit scores.
A bank must be able to demonstrate a strong relationship between the IRB risk drivers (such as scores) and comparable measures used for credit risk management. Thus, even if a bank uses custom scores for underwriting or account management, generic bureau scores could possibly be used for IRB segmentation purposes if the bank can demonstrate a strong correlation between these measures. A bank using credit scores as a segmentation criterion would have to validate the choice of the score (bureau, custom, and so forth) as well as demonstrate that the scoring system has adequate controls.
Examiners will expect that all aspects of the risk segmentation system, including credit scoring, are subject to thorough, independent, and well-documented validation. Validation for the risk segmentation system is ultimately tied to validation of the bank‘s quantification of IRB risk parameters. Examiners will also expect that the IRB validation process include:
- Evaluating the developmental evidence or logic of the system.
- Ongoing monitoring of system implementation and reasonableness (verification and benchmarking).
- Comparing realized outcomes with predictions (back-testing).
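The back-testing item above amounts to comparing realized outcomes with predictions, segment by segment. A minimal sketch, assuming bad rates tracked per score band; band labels, rates, and the relative tolerance are illustrative:

```python
def backtest_by_band(predicted_bad_rate, realized_bad_rate, tolerance=0.25):
    """Flag score bands where the realized bad rate deviates from the
    model's prediction by more than a relative tolerance.

    A simplified back-test sketch: real programs would also apply
    statistical tests and examine trends over several cohorts.
    """
    flagged = []
    for band, pred in predicted_bad_rate.items():
        real = realized_bad_rate[band]
        if pred > 0 and abs(real - pred) / pred > tolerance:
            flagged.append(band)
    return flagged

# A band predicted at a 5% bad rate that realizes 9% deviates by 80%
# relative and would be flagged for investigation.
```

Persistent flags in the same direction across bands would suggest the model needs recalibration or redevelopment.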
Validation
Examiners should determine whether management provides for appropriate, ongoing validation of scoring models, whether used as part of an IRB framework, for credit risk management, or for other purposes. Validation is a process that tests the scoring system's ability to rank order as designed and essentially answers whether the model is accurate and working properly. Model validation not only increases confidence in the reliability of a model but also promotes improvements and a clearer understanding of a model's strengths and weaknesses among management and user groups. Model validation can be costly, particularly for smaller banks. But using un-validated models to manage risks is a poor business practice that can be even more costly and can lead to safety and soundness concerns. Risks from not validating are elevated when a bank bases its credit card lending decisions on the scoring model alone (and does not consider other factors in the decision-making process), when the model is otherwise vital, or when the model is complex.
Examiners do not validate models; rather, validation is the responsibility of bank management. Examiners do, however, test the effectiveness of the bank‘s validation function by selectively reviewing aspects of the bank‘s validation work. Examiners could also identify concerns with a model‘s performance as a by-product of the credit risk review or other examination procedures.
Examiners should evaluate the bank‘s validation framework, including written validation policies, to determine if it is proper. Key elements of a sound validation policy generally include:
- Competent and Independent Review - The review should be as independent as practicable. The reviewer can be an auditor with technical skills, a consultant, or an internal party. In practice, model validation requires not only technical expertise but also considerable subjective business judgment.
- Defined Responsibilities - The responsibility for model validation should be formalized and defined just as the responsibility for model construction should be formalized and defined.
- Documentation - Validation cannot be properly performed if a sufficient paper trail of the model‘s design is not available. Weak documentation can be particularly damaging to the bank if the modeler leaves and the replacement is left with little to reference. Model documentation should summarize the general procedures used and the reasons for choosing those procedures, describe model applications and limitations, identify key personnel and milestone dates in the model‘s construction, and describe validation procedures and results. Technical complexity does not excuse modelers from the responsibility of providing clear and informative descriptions of the model to management.
- Ongoing validation - Validation should occur both pre- and post-implementation. Models should be subject to controls so that coding cannot be altered, except by approved parties. Most models are altered over time in response to changes in the environment or to incorporate improvements in understanding of the subject. Inappropriate model alterations can be used to circumvent risk limits or to disguise losses.
- Auditor involvement - Examiners should expect that the bank‘s audit program ensures that validation policies and procedures are being followed.
A clear understanding of the scoring model‘s intended use is critical to properly assessing a model‘s performance. But, regardless of the intended use, the three key components of a validation process, as mentioned in the prior section, apply: evaluation of the conceptual soundness of the model; ongoing monitoring that includes verification and benchmarking; and outcomes analysis.
Evaluating conceptual soundness involves assessing the quality of the model‘s construction and design. Examiners should determine whether management reviews documentation and empirical evidence supporting the methods used and the variables selected in the model‘s design. Modelers adopt methods, decide on characteristics, and make adjustments. Each of these actions requires judgment, and validation should ensure that judgments are well-informed. Examiners should expect management to review developmental evidence for new models and when a material change is made to an existing model.
The purpose of the second component of validation, ongoing monitoring, is to confirm that the model was implemented appropriately and continues to perform as intended. Process verification and benchmarking are its key elements. Process verification includes making sure that data are accurate and complete; that models are being used, monitored, and updated as designed; and that appropriate action is taken if deficiencies exist. Benchmarking uses alternative data sources or risk assessment approaches to draw inferences about the correctness of model outputs before outcomes are actually known. The time needed to generate a sufficient number of representative accounts (good and bad) to evaluate the effectiveness of the model post-implementation will vary depending on the product type or customer group. Consequently, benchmarking becomes an important tool in the validation process because it provides an earlier read of model performance than is available from back-testing.
The third component of validation, outcomes analysis, compares the bank‘s forecasts of model outputs with actual outcomes. It should include back-testing, which is the comparison of the outcomes forecasted by the models with actual outcomes during a sample period not used in model development (out-of-sample testing).
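The back-testing described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration: forecast bad rates per score band are compared against rates observed on an out-of-sample holdout, and bands whose deviation exceeds a relative tolerance are flagged for review. All band labels, figures, and the threshold are fabricated for illustration.

```python
# Hypothetical back-testing sketch: compare bad rates forecast at
# development time against rates observed on a holdout population.
forecast_bad_rate = {"300-579": 0.250, "580-669": 0.080,
                     "670-739": 0.030, "740-850": 0.008}

# Observed outcomes from a holdout period: band -> (bad accounts, total).
observed = {"300-579": (240, 1000), "580-669": (145, 1100),
            "670-739": (33, 1200), "740-850": (12, 1300)}

def backtest(forecast, observed, tolerance=0.5):
    """Flag bands where the observed bad rate deviates from the forecast
    by more than `tolerance` (a relative fraction)."""
    results = {}
    for band, (bad, total) in observed.items():
        actual = bad / total
        expected = forecast[band]
        rel_diff = (actual - expected) / expected
        results[band] = (actual, rel_diff, abs(rel_diff) > tolerance)
    return results

for band, (actual, rel, flag) in backtest(forecast_bad_rate, observed).items():
    print(f"{band}: actual {actual:.3f}, deviation {rel:+.0%}, review={flag}")
```

In this fabricated sample, only the 580-669 band would be flagged; a real exercise would also weigh sample size and the length of the observation window.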
Benchmarking and back-testing differ in that when differences are observed between the model output estimates and the benchmark, it does not necessarily indicate that the model is in error. Rather, the benchmark is an alternative prediction, and the difference may be due to different data or methods. When reviewing the bank‘s benchmarking exercises, examiners should find out whether management investigates the source of the differences and determines whether the extent of the differences is appropriate.
Examiners can compare the delinquency rate at each score interval as a simple test of overall performance of the scoring system. If the system is performing adequately, delinquency rates should rise as projected risk (as reflected in the scores) rises. Examiners may also want to review the results of various tests that management may be using. For example, divergence statistics and the population stability index are sometimes used. Divergence statistics measure the distance between the average score of satisfactory accounts and the average score of unsatisfactory accounts. The greater the distance, the more effective the scoring system is at segregating good and bad accounts. If the difference is small, a new or redeveloped scoring system may be warranted. The population stability index compares the current score distribution with that of the original development sample and helps identify and measure erosion in the model's predictive power. Other advanced statistical tools include chi-square, Kolmogorov-Smirnov (K-S) tests, and Gini coefficients. While examiners generally do not need to know the specifics of all of these tests, they should be aware that these tests are common in the industry and should expect management to be able to explain the validation tools used. Management's development of effective processes and exercise of sound judgment are just as important as the measurement technique used.
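Two of the tests named above can be sketched concretely. The formulas below are the commonly used industry forms of the divergence statistic and the population stability index; the sample scores and band shares are fabricated for illustration.

```python
import math

def divergence(good_scores, bad_scores):
    """Squared separation of mean scores, scaled by the pooled variance;
    larger values indicate better separation of good and bad accounts."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    num = (mean(good_scores) - mean(bad_scores)) ** 2
    return num / (0.5 * (var(good_scores) + var(bad_scores)))

def psi(expected_pct, actual_pct):
    """Population stability index between the development-sample score
    distribution and the current population (each a list of score-band
    shares summing to 1). Values above roughly 0.25 are often read as a
    material population shift."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected_pct, actual_pct))

good = [710, 695, 730, 688, 702, 741]   # fabricated satisfactory accounts
bad = [612, 598, 641, 575, 630, 607]    # fabricated unsatisfactory accounts
print(f"divergence: {divergence(good, bad):.2f}")
print(f"PSI: {psi([0.2, 0.3, 0.3, 0.2], [0.15, 0.28, 0.33, 0.24]):.4f}")
```

A PSI of zero means the current distribution matches the development sample exactly; the small positive value here would suggest only mild drift.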
Incorporation of combinations of model expertise and skill levels in the validation process is not uncommon. For example, internal staff could be used to verify the integrity of data inputs while a third party could be used to validate model theory and code. Examiners should determine what management‘s procedures are for ensuring that vendors‘ validation procedures are appropriate and meet the bank‘s standards. Management is ultimately responsible for ensuring the validation processes used, whether internal or external, are appropriate and adequate.
While scoring models developed in-house are becoming more prevalent, banks continue to purchase a number of models from vendors and the bureaus. Vendors are sometimes unwilling to share key formulas, assumptions, and/or program coding. In these cases, the vendor typically supplies the bank with validation reports performed by independent parties. The independent party‘s work can only be relied on if the information provided is sufficient to determine the adequacy of the scope, the proper conveyance of findings to the vendor, and the adequacy of the vendor‘s response thereto. Examiners assessing risks of modeling activities should pay particular attention to situations in which management has exclusively relied on a vendor‘s general acceptance by others in the industry as sufficient evidence of reliability and has not conducted its own comprehensive review of the vendor and its practices.
Examiners should evaluate management‘s processes for re-tooling or re-developing models that exhibit eroding performance. If evidence reliably shows that the behavior shift is small and likely to be of short duration, a policy shift or change to the model may not be warranted. But, if evidence suggests that the behavior shift is material and is likely to be long-term, there are several approaches management may consider to limit losses, depending on the ability to identify the most likely reason(s) for the performance shift. It can adjust its underwriting policy to narrow the market to a group believed to perform better than the population in general. This usually involves making changes to the bank‘s business strategy and, thus, is rather limited as a short-term risk management tool. Banks may also develop or purchase scoring models based on more recent information about the current population. In this case, the bank must weigh the costs of developing or purchasing a model against that of carrying an increased number of bad accounts booked by the existing model. One of the most common, and often the easiest, adjustments is to manage the cut-off score to maintain a targeted loss rate consistent with profit objectives.
Cut-off Score
Each bank develops its own policies and risk tolerances for its
credit card lending programs. Setting cut-off scores is one way banks
implement those risk tolerances. A cut-off score is the point below
which credit will not be extended and at or above which credit will be
extended (assuming a higher score equates to better creditworthiness). A
bank might have more than one cut-off score, with each tailored to a
specific population. The ability to customize cut-off scores allows
management to maximize the approval rate without sacrificing asset
quality. Some banks have cut-off bands, which define a range of scores
for which the consumer would undergo additional judgmental review.
Selecting a cut-off score involves determining the optimum balance between approval and loss rates. Management evaluates how much additional revenue will be added if the approval rate is increased and what the cost associated with the incremental increase in the bad rate will be. They also often give consideration to marketing expenses and customer service expenses. How management chooses to balance the competing goals determines the cut-off score. Odds charts are often involved in setting cut-off scores and are discussed in the next section.
As time passes, cut-off scores and models become less predictive because of economic changes, demographic shifts, and entry into new markets. Examiners should assess management‘s practices for reviewing cut-off scores and models, including resulting acceptance and loss rates. By monitoring the rates, management can appropriately adjust the cut-off score to change either acceptance rates or loss rates, depending on the strategic goals. For example, management could grow the portfolio by lowering the cut-off score (when lower scores equate to higher risk), taking on an elevated degree of credit risk and accepting increased loss rates. These dynamics of the scoring environment highlight the need for thorough tracking and calibration procedures.
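The approval-rate versus loss-rate trade-off that management weighs when setting a cut-off can be illustrated as follows. The score bands, applicant counts, and expected bad accounts are hypothetical.

```python
# Hypothetical score-band data: (band floor, applicants, expected bads).
bands = [(560, 400, 60), (600, 600, 54), (640, 900, 45),
         (680, 1100, 33), (720, 1000, 15)]

def cutoff_stats(cutoff):
    """Return (approval rate, bad rate among approved) for a cut-off,
    assuming applicants at or above the cut-off are approved."""
    approved = [(n, b) for floor, n, b in bands if floor >= cutoff]
    total = sum(n for n, _ in approved)
    bads = sum(b for _, b in approved)
    all_apps = sum(n for _, n, _ in bands)
    return total / all_apps, (bads / total if total else 0.0)

for cutoff in (560, 600, 640, 680):
    approval, bad_rate = cutoff_stats(cutoff)
    print(f"cutoff {cutoff}: approval {approval:.1%}, bad rate {bad_rate:.2%}")
```

Raising the cut-off lowers both the approval rate and the bad rate; management's chosen balance between the two determines the cut-off.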
Validation Charts and Calibration
Most scores are rank-order measurements that, by themselves, are
generally not indicative of the likelihood or magnitude of an event or
outcome. Rather, they summarize a plethora of consumer data and
essentially do little except rank order the consumer‘s risk against the
risk of other consumers. But, in addition to this rank-ordering, scores
must give accurate outcome (usually default) probabilities to be the
most useful. Calibration is the process by which a model‘s output (in
this case scores) is converted into the actual rate of the outcome
(default) and includes adjusting or modifying for the difference between
the expected rate based on the historical database and the actual rate
observed. The process is aimed at converting or modifying the model‘s
output into a probability based on the expected odds for the historical
population and adjusting for the relevant population. Often, it is
thought of as the process of determining and fine tuning the grades or
gradation of a quantitative measuring system by comparing them with a
set standard or starting point. Frequently the standard used might be a
bureau‘s validation chart.
In general, validation charts (also commonly known as odds charts) reflect the estimate of the percentage of borrowers in a defined population who will evidence a certain trait or outcome, such as delinquency, loss, or bankruptcy. Examiners normally expect management to develop its own odds chart(s) when it has sufficient historical data. When properly developed, customized odds charts are more predictive than odds charts that are available from the bureaus. Validation charts available from the bureaus display the odds of poor performance (such as delinquency, loss, or bankruptcy) observed at a given bureau score. Each set of charts available from the bureaus is specific to a model, an industry, and an application (where application refers to how the scores will be used). For example, the bureaus have validation charts available for the bankcard industry and for subprime lending. The bureaus' validation charts can be helpful as a starting point for management in setting risk strategies but do not precisely predict the actual odds that each bank will experience. Rather, a bank's particular market will have different characteristics and, thus, different odds. The risk ranking based on bureau score will generally hold, but the actual odds of going bad that each score represents will vary between banks and portfolios. Thus, management must provide for sufficient calibration processes. For example, if the bureau odds chart indicates that 1 out of every 20 consumers with a credit score of XYZ will be a bad account but the bank is finding that 5 out of every 20 consumers with that score are bad accounts, calibration most likely is needed.
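A customized odds chart of the kind described above can be built directly from the bank's own booked-account history. The sketch below groups accounts into 50-point score bands and computes good:bad counts per band; the account data, band width, and starting score are fabricated for illustration.

```python
# Fabricated history of booked accounts: (score at booking, went_bad).
history = [
    (585, True), (590, False), (605, False), (612, True), (633, False),
    (641, False), (655, False), (662, True), (688, False), (695, False),
    (701, False), (712, False), (718, False), (733, False), (745, False),
]

def odds_chart(accounts, band_width=50, band_start=550):
    """Return {band floor: (goods, bads)} for fixed-width score bands."""
    chart = {}
    for score, bad in accounts:
        band = band_start + ((score - band_start) // band_width) * band_width
        goods, bads = chart.get(band, (0, 0))
        chart[band] = (goods + (not bad), bads + bad)
    return {band: counts for band, counts in sorted(chart.items())}

for band, (goods, bads) in odds_chart(history).items():
    print(f"{band}-{band + 49}: {goods}:{bads} good:bad")
```

With real volumes, the per-band odds replace the bureau chart's generic figures, giving the bank odds that reflect its own market.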
Calibration most often adjusts or refines an odds chart when significant variation exists from the general forecast. But there are other instances in which the scores and scaling could be adjusted, or calibrated. For example, calibration might be used to make all scores positive: if a model's scores are (52), (6), and 15, an entity could add 52 points, so the scores would be 0, 46, and 67. Calibration might also be used to compress the scale (for example, if every 31 points doubles the odds of bad, a bank could calibrate the scale such that the bad odds are doubled every 20 points). Calibrations might also be done to make users feel comfortable. For example, if an existing cut-off score is XYZ based on an internal model that predicts that one percent of accounts with a score of XYZ will be bad, then calibration could be used to ensure that accounts scored XYZ continue to tie to the likelihood that one percent will be bad; in this way, the bank would not have to change the cut-off score to keep getting the same caliber of customers. Examiners should ascertain whether recent calibrations are well-documented and have been properly executed.
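The scaling adjustments in the examples above, making all scores positive and changing the number of points needed to double the odds, amount to simple affine transforms. A minimal sketch, in which the anchor score is hypothetical:

```python
def shift_positive(scores):
    """Add a constant so the lowest score becomes zero."""
    low = min(scores)
    return [s - low for s in scores]

def rescale_pdo(score, anchor, old_pdo=31, new_pdo=20):
    """Map a score onto a new points-to-double-odds (PDO) scale, holding
    the odds at the `anchor` score fixed so an existing cut-off at the
    anchor keeps its meaning."""
    return anchor + (score - anchor) * new_pdo / old_pdo

# The document's example: scores of (52), (6), and 15 shifted by 52 points.
print(shift_positive([-52, -6, 15]))          # -> [0, 46, 67]
# A score 31 points above a hypothetical anchor of 620 lands 20 points
# above it on the new scale, so the odds it represents are unchanged.
print(rescale_pdo(651, anchor=620))
```

Because odds double every fixed number of points on such scales, compressing the point spacing changes only the labeling of risk, not the rank-ordering.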
Overrides
Overrides are discussed in the Underwriting and Loan Approval Process
chapter. Exceptions outside of management‘s credit scoring parameters
are called overrides and may be high-side or low-side. When management
overrides the cut-off score, they introduce information into the
ultimate credit decision that is not considered in the scoring system.
If the scoring system is effectively predicting loss rates for a
designated population and the system reflects management‘s risk
parameters, examiners should expect management to use overrides with
considerable caution. Excessive overrides may negate the benefits of an
automated scoring system. A high volume of overrides is equivalent to
having no cut-off score and jeopardizes management‘s ability to measure
the success of the credit scoring system. Once a bank approves credits
that fail to meet the scoring system‘s criteria, it has broken its odds
and may be taking on higher levels of risk than acceptable for the
bank‘s risk appetite and/or capabilities to control. However, business
reasons may justify a temporary increase in override rates. For example,
when transitioning to a new system, override rates might rise until a
reasonable level of confidence in the new approach is achieved.
Credit Scoring Model Limitations
Determining whether scoring models are managed by people who
understand the models‘ strengths and weaknesses is an integral part of
the examination process. Users lacking a complete understanding of how
the models are made, how they should be used, or how they interface with
the bank‘s lending policies and procedures can expose the bank to
risks, as discussed throughout this chapter. Scoring is only useful if
its limitations are properly understood, and examiners should draw a
conclusion about whether an understanding of the model‘s limitations by
management is evident.
One limitation is that scoring model output is only as good as the input that is used. If data going into the scoring model is inaccurate (for instance, if information on the consumer‘s credit bureau report is erroneous), the model‘s output (score) will be erroneous. Depending on how the erroneous information is weighted in the scoring formula, the impact on the score could be substantial. Moreover, if management does not select and properly weight the best predictive variables, the model‘s output will likely be less effective than had the most predictive variables been used and properly weighted. Management must make sure that the variables used in the models are appropriate, predictive, and properly weighted to arrive at the best credit decision and that data inputs are complete and accurate.
The effectiveness of the model output (scores) can also be constrained by factors such as changing economic conditions and business environments. Examiners should identify whether management monitors warning signs of market deterioration, such as increases in personal bankruptcies, which may affect the accuracy of model assumptions. Robust models are typically more resilient to these types of changes.
Models, even if good at risk-ranking an overall market segment, can be limited if they do not reflect the bank's population. A model is typically developed for a certain target population and may be difficult to adapt to other populations. In most cases, a credit scoring model should only be used for the product, range of loan size, and market that it was developed for. When a bank tries to adapt the model to a different population, performance of that population will likely deviate from expectations. When a bank implements or adapts a model to a new market or population for which it was not designed, examiners should determine whether management performs an analysis similar in scope to the one used to validate the model at implementation.
Credit scoring is good at predicting the probability of default but generally not at predicting the magnitude of losses. (Normally, other models, such as loss models, focus on predicting the level (magnitude) of risk.) Generic credit scoring models in particular most likely rank order the risk appropriately but generally do not accurately predict the level of the risk. Thus, banks that use generic models should not assume that their loss rates will be the same as those reflected in industry odds charts. How accounts ultimately perform depends on a number of factors, including account management techniques used, the size of line granted, and so forth.
Scorecards could be considered, by their very nature, to be dated by the time they are put into production. They are based on lengthy historical data and take time to develop. Moreover, models are calibrated using historical data, so if relevant un-modeled conditions change, the model can have trouble forecasting out of sample.
Along similar lines, during times of strong economic growth, models may be ill-prepared to predict borrower performance in recessionary conditions, particularly if the historic period observed did not include recessionary conditions. There are several behaviors that could impact the model‘s effectiveness in recessionary times. One is that consumers might prioritize their payments to pay off secured debt rather than unsecured debt. In hard times, this could leave a bank that is holding the consumer‘s unsecured credit card debt as one of the last to get paid, if paid at all.
The effectiveness of scoring models can also be limited by human involvement. For example, when models are augmented by managerial judgment (for instance, in the case of overrides), results from the model and subsequent validation processes can become seriously compromised. In addition, unsupported overconfidence in the models could lead some banks to move up or down market to make larger or more risky loans, respectively. Without proper model validation, such movements could result in the bank taking on more credit risk than it can control.
Automated Valuation Models
Automated valuation models (AVMs) are sometimes used to support
evaluations or appraisals. Examiners should look at management‘s
periodic validation of AVMs for mitigating the potential valuation
uncertainty in the model and should confirm whether its documentation
covers analyses, assumptions, and conclusions. Validation includes
back-testing a representative sample of the valuations against market
data on actual sales (where sufficient information is available) and
should cover properties representative of the geographic area and
property type for which the tool is used. Many vendors provide a
"confidence score" which usually relates to the accuracy of the value
provided. Confidence scores come in many formats and are calculated
based on differing systems. Examiners should determine whether
management understands how the models work as well as what the
confidence scores mean and should confirm whether management has
identified confidence levels appropriate for the risk in given
transactions.
Summary of Examination Goals – Scoring and Modeling
The examiner‘s role is to evaluate scoring, model usage, and model
governance practices relative to the bank‘s complexity and the overall
importance of scoring and modeling to the bank‘s credit card lending
activities. The role includes:
- Identifying the types of scoring systems used in the credit card lending programs and whether the models are generic, custom, or vendor-supplied. A model inventory is normally available for review.
- Determining how management uses scores in its decision-making processes and whether each model‘s use is consistent with the intended purpose.
- Assessing whether designated staff possess the necessary expertise.
- Determining whether management thoroughly understands the models used.
- Reviewing cut-off scores and odds charts to assess the level of risk being taken.
- Testing the effectiveness of the bank‘s validation function by selectively reviewing various aspects of the bank‘s validation work for key models.
- Evaluating the scope of validation work performed.
- Reviewing reports summarizing validation findings and any additional workpapers necessary to understand findings.
- Evaluating management‘s response to the reports, including remediation plans and timeframes.
- Assessing the qualifications of staff or vendors performing the validation.
- Assessing the bank‘s calibration procedures, including documentation thereof.
- Determining whether credit bureau, behavior, and/or other scores enhance account management and collection practices.
- Assessing override policies and practices.
  - Reviewing the number/volume and types of overrides.
  - Verifying that override reports are reviewed by management and that performance is adequately tracked.
  - Determining the impact, if any, of overrides on asset quality.
- Assessing whether the bank‘s audit program appropriately considers models and oversight thereof.
- Identifying instances in which management has taken action when performance of the scoring model deteriorated and determine if the action was appropriate, effective, and timely.
- Determining if management is prepared to take future action if the scoring model‘s performance deteriorates.
- Determining if there are any models under development.
  - Identifying potential impacts on the bank from implementation of the forthcoming models.
  - Understanding what prompted the model development.
  - Ascertaining the planned implementation date of the model.
- For models developed by third parties, assessing whether the systems are supervised and maintained in accordance with vendor-provided specifications and recommendations.
Examiners normally select models for review in connection with the examination when model use is vital or increasing. Focus may also be placed on models new or acquired since the prior examination. Quantitative or information technology (IT) specialists are sometimes needed for complex models, but examiners normally can perform most model reviews.
Chapter VII. – Underwriting and Loan Approval Process
https://www.fdic.gov/regulations/examinations/credit_card/ch7.html
General Underwriting Considerations
Program-Specific Underwriting Considerations
Affinity and Co-Branding Programs
Private Label Programs
Corporate Card Programs
Subprime Credit Card Programs
Home Equity Credit Card Programs
Cash Secured Credit Card Lending
Purchased Portfolios
Comparison of Automated and Judgmental Processes
Credit Bureau Preferences
Post-Screening
FACT Act
Multiple Accounts
Initial Credit Line Assignments
Policy and Underwriting Exceptions
Indices and Reporting
Summary of Examination Goals – Underwriting and Loan Approval Process
VII. Underwriting and Loan Approval Process
Underwriting is the process by which the lender decides whether an applicant is creditworthy and should receive a loan. An effective underwriting and loan approval process is a key precursor to favorable portfolio quality, and a main task of the function is to avoid as many undue risks as possible. When credit card loans are underwritten with sensible, well-defined credit principles, sound credit quality is much more likely to prevail.
General Underwriting Considerations
To be effective, the underwriting and loan approval process should
establish minimum requirements for information and analysis upon which
the credit is to be based. It is through those minimum requirements that
management steers lending decisions toward planned strategic objectives
and maintains desired levels of risk within the card portfolio.
Underwriting standards should not only result in individual credit card
loans with acceptable risks but should also result in an acceptable risk
level on a collective basis. Examiners should evaluate whether the
bank‘s credit card underwriting standards are appropriate for the
risk-bearing capacity of the bank, including any board-established
tolerances.
Management essentially launches the underwriting process when it identifies its strategic plan and subsequently establishes the credit criteria and the general exclusion criteria for consumer solicitations. Procedures for eliminating prospects from solicitation lists and certain screening processes could also be considered initial stages of the underwriting and loan approval process in that they assist in weeding out consumers that may be non-creditworthy in relation to the bank‘s risk tolerance level, identified target market, or product type(s) offered.
Compared to other types of lending, the underwriting and loan approval process for credit card lending is generally more streamlined. Increasingly, much of the analytical tasks of underwriting are performed by technology, such as databases and scoring systems. Whether the underwriting and loan approval process for credit cards is automated, judgmental, or a combination thereof, consistent inclusion of sufficient information to support the credit granting decision is necessary. Underwriting standards for credit cards generally include:
- Identification and assessment of the applicant's repayment willingness and capacity, including consideration of credit history and performance on past and existing obligations. While underwriting is based on payment history in most instances, there are cases, such as some application strategies, in which guidelines also consider income verification procedures. For example, assessments of income such as self-employment income, investment income, and bonuses might be used.
- Scorecard data.
- Collateral identification and valuation, in the case of secured credit cards.
- Consideration of the borrower‘s aggregate credit relationship with the bank.
- Card structure and pricing information.
- Verification procedures.
The compatibility of underwriting guidelines with the loan policy, the strategic plan, and the desired customer profile should be assessed. Examiners also determine whether such guidelines are documented, clear, and measurable, such that management can track compliance with and adherence to the guidelines. Moreover, examiners should assess management‘s periodic review process for ensuring that card underwriting standards appropriately preserve and strengthen the soundness and stability of the bank‘s financial condition and performance and are attuned with the lending environment.
In addition to the decision factors, management should also set forth guidelines for the level and type of documentation to be maintained in support of the decision factors. Records typically include, but are not limited to, the signed application, the verified identity of the borrower, and the borrower‘s financial capacity (which may include the credit bureau report or score). In the case of secured cards, records to look for include a collateral evaluation and lien perfection documents. Another item of interest to review includes a method of preventing application fraud such as name and address verification, duplicate application detection, social security number verification, or verification of other application information. The verification level supported by management normally depends upon the loan‘s risk profile as well as the board‘s risk appetite.
The process for altering underwriting terms and standards can involve prominent decisions by management to amend policies and procedures. However, more subtle or gradual modifications to the application of the card underwriting policies and procedures can also produce changes in the bank's risk profile. For instance, the bank might increase credit limits or target a higher proportion of solicitations to individuals in lower score bands without reducing the minimum credit score. Albeit less apparent, the resultant change can create significant loan problems if not properly controlled. Examiners should assess management's records that outline underwriting changes, such as chronology logs, to determine whether the records are well-prepared and complete and to identify underwriting changes that, individually or in aggregate, may substantially impact the quality of accounts booked.
In the hyper-competitive credit card market, some banks may be inclined to relax lending terms and conditions beyond prudent bounds in attempts to obtain new customers or retain existing customers. Examiners should be sensitive to all levels of credit easing and the potential impact of the ease on the bank‘s risk profile. Rapid growth can, but does not necessarily, indicate a decline in underwriting standards. In addition, rising loss rates may indicate a weakening of underwriting criteria. Examiners should also consider that the bank‘s appetite for risk often involves balancing underwriting and the pricing structure to achieve desired results. Thus, management may have priced the products to sufficiently compensate for the increased risk involved in easing credit standards. Take, for example, subprime loans which typically exhibit higher loss rates. They can be profitable, provided the price charged is sufficient to cover higher loss rates and overhead costs related to underwriting, servicing, and collecting the loans. Examiners should sample management‘s documentation that supports credit decisions made. Management‘s documentation might include the contribution to the net interest margin and noninterest income in relation to historical delinquencies and charge-offs compared to other types of card programs. When relaxed credit underwriting is identified, examiners should assess the adequacy of the total strategy.
Results of credit underwriting weaknesses are not limited to elevated credit risk. For example, the weaknesses may cause difficulties in securitization or sales of the underwritten assets, thereby elevating liquidity risk. Further, future credit enhancements and pricing for securitizations may be more costly or less readily available when poorly underwritten receivables adversely affect the bank's reputation. In some cases, access to securitization-based funding may vanish. Impairment of a bank's reputation as an underwriter can limit accessibility to financial markets or can raise the costs of such accessibility.
Program-Specific Underwriting Considerations
Affinity and Co-Branding Programs
Examiners normally expect banks to refrain from materially modifying underwriting standards for affinity and co-branded card customers. Rather, credit card underwriting guidelines for partnered programs should generally be compatible with the bank's loan policy, strategic plan, and desired customer profile. If underwriting practices diverge from the bank's normal standards, examiners need to determine the appropriateness of program differences and the overall impact on portfolio quality. They should look for evidence that management has ensured that the eased standards still result in an acceptable level of risk and that any elevated risks are appropriately addressed.
Private Label Programs
Examiners should expect management to pay careful attention to the financial condition of the retail partner when it determines whether to offer private label cards. They also normally expect management to refrain from materially modifying underwriting standards to accommodate its retail partners. A retailer that aims to maximize the number of cards in circulation may expect the bank to lower its credit standards. If the bank lowers its credit standards, management should ensure that the standards still result in an acceptable level of risk and that any elevated risks are appropriately addressed.
Loss-sharing agreements can be an effective means to mitigate risk and give merchants reason to accept more conservative underwriting standards. With a loss-sharing agreement, either the bank's loss rate is capped at a certain percentage or the merchant covers a certain percentage of the dollar volume of losses. The retail partner's share of losses can be quite high, and the bank's role may be more similar to that of a servicer than a lender. Examiners should analyze management's practices for ensuring that the retailer has the financial capacity to cover its portion of the losses. They should also gauge management's procedures for analyzing and responding to contingencies, such as if the retailer were to file bankruptcy and the cardholders were not compelled to repay their balances.
Corporate Credit Card Programs
Corporate credit card programs may pose more commercial credit risk than consumer credit risk because the company may be primarily liable for the debt. In cases where the corporation is primarily liable, examiners should expect that management's decision to grant the line of credit is consistent with the institution's commercial loan underwriting standards. The credit granting process should also consider relationships that the company has with the bank's commercial banking department. Examiners should review the contract terms of corporate credit card programs in a manner similar to how they would review any other commercial loan file. Documentation should include management's assessment of the financial condition of the company along with its willingness to pay in a timely manner. Examiners should also ascertain whether the bank or the corporate borrower decides which company employees receive corporate cards. If the borrower decides, examiners should determine what controls the bank uses to reduce risk.
Subprime Credit Card Programs
Subprime lending is generally defined as providing credit to consumers who exhibit characteristics suggesting a much higher risk of default than that of traditional bank loan customers. Examiners should evaluate whether management has carefully attended to underwriting standards for subprime credit card programs. Underwriting for subprime credit cards is usually based upon credit scores generated by sophisticated scoring models, which use a substantial number of attributes to determine the probability of loss for a potential borrower. Those attributes often include the frequency, severity, and recency of delinquencies and major derogatory items, such as bankruptcy. When underwriting subprime credit cards, banks generally use risk-based pricing as well as tightly controlled credit limits to mitigate the increased credit risk evident in the consumer's profile. Banks may also require full or partial collateral coverage, typically in the form of a deposit account at the bank. Credit availability and card utility concerns are other important considerations.
Home Equity Credit Card Programs
Home equity lending in general has recently seen rapid growth and eased underwriting standards. The quality of real estate secured credit card portfolios is usually subject to increased risk if interest rates rise and/or home values decline. As such, sound underwriting practices are indispensable in mitigating this risk. Examiners should look for evidence that management considers all relevant risk factors when establishing product offerings and underwriting guidelines. Generally, these factors include borrowers' income and debt levels, credit score (if obtained), and credit history, as well as loan size, collateral value (including valuation methodology), and lien position. Examiners should determine whether effective procedures and controls for support functions, such as perfecting liens, collecting outstanding loan documents, and obtaining insurance coverage, are in place.
For real estate secured programs, compliance with the following guidance is considered:
- Part 365 of the FDIC Rules and Regulations – Real Estate Lending Standards, including Appendix A which contains the Interagency Guidelines for Real Estate Lending Policies.
- Interagency Appraisal and Evaluation Guidelines.
- Interagency Guidance on High Loan-to-Value Residential Real Estate Lending.
- Home Equity Lending Credit Risk Management Guidance issued May 24, 2005.
Other laws, several of which are reviewed during the compliance examination, also apply.
Part 365 requires banks to maintain written real estate lending policies that are consistent with sound lending principles and appropriate for the size of the institution as well as the nature and scope of its operations. It specifically requires policies that include, but are not limited to:
- Prudent underwriting standards, including LTV limits.
- Loan administration procedures.
- Documentation, approval and reporting requirements.
Consistent with the agencies' regulations on real estate lending standards, prudently underwritten home equity credit card loans should include an evaluation of the borrower's capacity to adequately service the debt. Given the sizable credit lines typically extended under real estate secured products, an evaluation of repayment capacity should consider the borrower's income and debt levels, not just the borrower's credit score. A prominent concern is that borrowers will become overextended and the bank may have to consider foreclosure proceedings. As such, underwriting standards should emphasize the borrower's ability to service the card line from cash flow rather than from the sale of the collateral. If the bank has offered a low introductory rate, the repayment capacity assessment should consider the rate that could be in effect at the conclusion of the introductory term.
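The point about underwriting to the post-introductory rate can be sketched in code. This is a minimal illustration, not examination procedure: the 2% minimum-payment factor, the 40% debt-to-income ceiling, and all rates and dollar figures are assumptions made for the example.

```python
# Hypothetical sketch: assess repayment capacity at the rate in effect
# after the introductory term, not at the teaser rate. The 2% minimum
# payment factor and 40% DTI ceiling are illustrative assumptions.

def monthly_debt_service(line_amount: float, annual_rate: float,
                         min_payment_pct: float = 0.02) -> float:
    """Estimate the monthly payment on a fully drawn line: the greater of
    interest accrued at the given rate or a minimum-payment floor."""
    interest = line_amount * annual_rate / 12
    floor = line_amount * min_payment_pct
    return max(interest, floor)

def passes_capacity_test(line_amount: float, post_intro_rate: float,
                         monthly_income: float, monthly_debt: float,
                         max_dti: float = 0.40) -> bool:
    """Underwrite against the post-introductory rate."""
    new_payment = monthly_debt_service(line_amount, post_intro_rate)
    dti = (monthly_debt + new_payment) / monthly_income
    return dti <= max_dti

# A $30,000 line at a 9.9% post-intro rate (intro rate ignored on purpose):
ok = passes_capacity_test(30_000, 0.099, monthly_income=6_000,
                          monthly_debt=1_800)
```

The key design point mirrors the guidance: the introductory rate never enters the calculation, only the rate that could apply after the introductory term ends.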
A potentially dangerous misstep in underwriting home equity credit cards is placing undue reliance upon a property's value in lieu of an adequate initial assessment of an applicant's repayment ability. However, establishing adequate real estate collateral support in conjunction with appropriately considering the applicant's repayment ability is a sensible and necessary practice for home equity credit card lending.
Examiners should expect that management has established criteria for determining an appropriate real estate valuation methodology (for example, higher-risk accounts should be supported by more thorough valuations) and requires sufficient documentation to support the collateral valuation. Banks have streamlined real estate appraisal and evaluation processes in response to competition, cost pressures, and technological advancements. These changes, coupled with elevated LTV risk tolerances, have heightened the importance of strong collateral valuation policies and practices. The Interagency Appraisal and Evaluation Guidelines set forth expectations for collateral valuation policies and procedures. Use of automated valuation models (AVMs) and other collateral valuation tools for the development of appraisals and evaluations is increasingly popular. AVMs are discussed in the Scoring and Modeling chapter.
Management is expected to establish limitations on the amount advanced in relation to the value of the collateral (LTV limits) and to take appropriate measures to safeguard its lien position. Examiners should determine whether management verifies the amount and priority of any senior liens prior to the loan closing when it calculates the LTV ratio and assesses the collateral's credit support. The Interagency Guidelines for Real Estate Lending Policies (Appendix A to Part 365) and the Interagency Guidance on High LTV Residential Real Estate Lending address LTV considerations, including supervisory LTV limitations. There are several factors besides LTV limits that influence credit quality. Therefore, credit card loans that meet the supervisory LTV limits should not automatically be considered sound, and credit card loans that exceed the supervisory LTV limits should not automatically be considered high risk. Examiners should refer to the mentioned guidance and to the Risk Management Manual of Examination Policies for LTV details, such as reporting requirements and aggregate limits in relation to capital levels.
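The LTV calculation described above, with senior liens verified before closing, reduces to simple arithmetic. The sketch below is illustrative only: the 90% reference limit is an assumption for the example, and actual supervisory limits come from Appendix A to Part 365 and the related guidance.

```python
# Illustrative combined LTV arithmetic: the new line plus verified senior
# liens, relative to collateral value. The 0.90 threshold is an assumed
# reference point for flagging, not an actual supervisory limit.

def combined_ltv(credit_line: float, senior_liens: float,
                 collateral_value: float) -> float:
    """Combined loan-to-value: total debt secured by the property
    divided by the collateral value."""
    return (credit_line + senior_liens) / collateral_value

ltv = combined_ltv(credit_line=25_000, senior_liens=150_000,
                   collateral_value=200_000)
exceeds_reference = ltv > 0.90  # a flag for reporting, not an auto-decline
```

Consistent with the guidance, exceeding the reference limit is treated as a reporting flag rather than an automatic credit decision.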
Cash Secured Credit Card Lending
While cash secured credit card lending may be less susceptible to credit risk than other types of credit card lending, credit risk is not eliminated. The outstanding balance on an account could exceed the collateral amount either because the account was only partially collateralized at set-up or because the cardholder was allowed to go over-limit. Partially secured cards represent unsecured credit to higher-risk consumers to the extent that the line or balance exceeds the deposit amount. Underwriting for these types of accounts (as well as for those fully secured) should clearly substantiate the consumer's willingness and ability to service the debt.
Examiners should verify whether management has established clear underwriting policies and practices for cash secured lending. These policies should include, among other items, guidelines for credit limit assignments in relation to the amount of collateral required. Examiners should also determine management's practices for performing credit analysis on the applicant, which may include verifying the applicant's income, and for ensuring that a perfected security interest in the deposit is established and maintained. If the bank retains possession of the deposit, its security interest in the deposit is generally perfected.
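The exposure arithmetic for partially secured cards can be sketched as follows; the figures are illustrative assumptions, not guideline values.

```python
# Sketch of unsecured exposure on a cash secured card: the excess of the
# line, or the outstanding balance if the cardholder has gone over-limit,
# over the deposit held as collateral. All figures are illustrative.

def unsecured_exposure(credit_limit: float, deposit: float,
                       balance: float = 0.0) -> float:
    """Unsecured credit extended: the greater of the limit or the
    balance, less the deposit, floored at zero."""
    exposure = max(credit_limit, balance) - deposit
    return max(exposure, 0.0)

# A $500 line secured by a $300 deposit leaves $200 unsecured;
# an over-limit balance of $560 would leave $260 unsecured.
base = unsecured_exposure(500, 300)        # 200.0
over = unsecured_exposure(500, 300, 560)   # 260.0
```

A fully secured card (deposit at or above the limit and balance) produces zero exposure, which matches the chapter's point that credit risk arises only to the extent the line or balance exceeds the deposit.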
Purchased Portfolios
Similar to expectations for partnership agreements (that is, co-branded and similar programs), examiners should expect the bank to refrain from materially modifying underwriting standards when it purchases portfolios of credit card receivables. If underwriting criteria are eased in comparison to the bank's internally established criteria, the result could be elevated credit risk that management would need to address, such as by holding higher levels of loss allowances or hiring additional collectors. And, if the cardholder base is significantly different from that normally held by the bank, management could be at risk of not fully understanding the expectations of those cardholders, thereby raising reputation risk. Examiners should confirm whether management considers underwriting criteria used by originators in its due diligence processes for portfolio purchases. If underwriting criteria for purchased portfolios diverge from the bank's typical underwriting standards, examiners need to determine the appropriateness of the differences in relation to management's capabilities and to the overall impact on portfolio quality and the bank's risk profile. Purchased credit card portfolios are discussed in the Purchased Portfolios and Relationships chapter.
Comparison of Automated and Judgmental Processes
Once a consumer completes an application, the application either is
processed through an automated processing system or is processed
manually, or judgmentally. Regardless of the type of process used, the
audit department should examine it with any deviations communicated
promptly to management.
Automated underwriting and loan approval processes are increasingly popular and vary greatly in complexity. In an automated system, credit is generally granted based on the cut-off score and the desired loss rate. These systems are often based on statistical models and apply automated decision-making where possible. Banks sometimes establish auto-decline or auto-approve ranges where the system automatically approves or declines the applicant based on established criteria, such as scores. Automated systems may also incorporate criteria other than scores (such as rules or overlays) into the credit decision. For example, the presence of certain credit bureau attributes (such as bankruptcy) outside of the credit score itself could be a contributing factor in the decision-making process. Examiners should gauge management's practices for validating the scoring system and other parameters set within automated systems, as well as for verifying the accuracy of data entry for those systems.
Judgmental underwriting processes also vary in complexity but are less common than in the past, mainly due to advances in automated underwriting. While less common, judgmental processes are preferred and/or necessary in some cases, such as when the bank cannot (or does not want to) pay the cost of establishing and maintaining an automated system, or when the portfolio is very small and perhaps consists of the bank's traditional customers. In a judgmental process, credit is granted based on a manual review against the bank's underwriting guidelines, which govern the quality of new accounts. The bank's control systems for ensuring that analysts consistently follow policy should undergo review during the examination.
When an applicant is approved or denied contrary to a system's recommendations or guidelines, it is usually called an override. For example, if the applicant falls outside of the auto-approval range, the applicant may be referred for manual review in certain cases. As such, the applicant may be approved despite not meeting the system's criteria, which is called a low-side override. And, in other cases, applicants that would be auto-approved might be referred for manual review and declined based on rules or other guidelines that management has established or authorized, which is referred to as a high-side override. High-side overrides generally occur when derogatory information becomes known to management.
The following types of overrides are commonly encountered:
- Informational overrides occur when information not included in the scoring process becomes known to management.
- Policy overrides occur when management establishes special rules for certain types of applications.
- Intuitional overrides occur when management makes decisions based on judgment rather than the scoring model.
Scoring is a predominant feature of most automated underwriting and loan approval processes. When scoring is used to grant credit, quality is controlled by setting the cut-off score at the desired loss rate. Credit scoring is discussed in the Scoring and Modeling chapter.
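The interaction of cut-off scores, auto-approve/auto-decline ranges, overlays, and overrides described above can be sketched as follows. The thresholds and the bankruptcy overlay are illustrative assumptions, not supervisory values or any particular bank's criteria.

```python
# Minimal sketch of automated decisioning: auto-approve and auto-decline
# score ranges, a derogatory-information overlay routed to manual review,
# and override classification. All thresholds are illustrative.

AUTO_APPROVE_MIN = 720   # assumed auto-approve floor
AUTO_DECLINE_MAX = 620   # assumed auto-decline ceiling

def decide(score: int, has_recent_bankruptcy: bool) -> str:
    """Return 'approve', 'decline', or 'refer' (manual review)."""
    if has_recent_bankruptcy:   # overlay applied outside the score itself
        return "refer"
    if score >= AUTO_APPROVE_MIN:
        return "approve"
    if score <= AUTO_DECLINE_MAX:
        return "decline"
    return "refer"              # middle band goes to manual review

def classify_override(score: int, cutoff: int, approved: bool) -> str:
    """Low-side: approved below the cut-off; high-side: declined at or
    above it; otherwise no override."""
    if approved and score < cutoff:
        return "low-side"
    if not approved and score >= cutoff:
        return "high-side"
    return "none"
```

The `classify_override` helper mirrors the definitions above: a low-side override extends credit to an applicant the system's cut-off would reject, and a high-side override declines an applicant the cut-off would accept.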
Credit Bureau Preferences
Information in a consumer's credit file is not necessarily identical
across all credit bureaus. Banks often maintain a table reflecting
preferences for certain credit bureaus to be used in the underwriting
(and account management) process. The table is usually based on the
geographic locations targeted. Management's periodic analysis of bureau
preference to determine optimal credit bureaus for different states or
localities should be subject to examination review. Optimal credit
bureaus are generally described as giving the most comprehensive,
accurate, relevant, and timely information on the consumer such that the
bank can make the most informed credit and pricing decisions possible
based on available information.
Post-Screening
Post-screening is a supplementary risk management tool. Sound
pre-screening criteria are a first line of defense against taking on
undesirable accounts, and post-screening will not correct poor selection
criteria. Nevertheless, it can effectively reduce the exposure from
undesirable accounts. Post-screening is a process used in conjunction
with pre-screened solicitations to identify potentially bad, versus
good, accounts. New credit reports are obtained for respondents after
the consumer accepts the pre-screened offer and are reviewed for
negative information established after the pre-screened list was created
or missed in the initial screening.
The FCRA significantly limits an institution's ability to deny credit once an offer has been accepted. Nevertheless, in some situations, management may be able to take action to reduce risk to the bank. A pre-screened credit offer may be withdrawn in certain situations. Bankruptcy, foreclosures, judgments, attachments, and similar items may be grounds for withdrawing an offer if such items occurred between the prescreen and the consumer's acceptance, but only if these criteria were part of the original prescreening. Identifying and rejecting these potentially bad accounts reduces the bank's exposure to loss. In addition, an institution is not required to grant credit if the consumer is not creditworthy or cannot furnish required collateral, provided that the underwriting criteria are determined in advance and applied consistently. If the consumer no longer meets the lender's predetermined criteria, the lender is not required to issue the credit card. For example, if the cut-off score in the predetermined criteria is 700 and the consumer's credit score has deteriorated to 695 at the time of the post-screen, the institution would most likely not be required to issue the credit card. However, if the consumer's score fell from 780 to 704, the institution would still have to grant credit because the consumer met the predetermined standard. Depending on the specifics of the offer, the bank might be able to reduce the size of the line extended, provided that any relationship between the credit score and the credit line amount was also determined by the institution before the offer was made.
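The 700-cut-off example above reduces to a simple predetermined-criteria check; the sketch below mirrors the figures in the text and is an illustration of the logic, not legal guidance on FCRA obligations.

```python
# Sketch of the post-screening logic in the example above: an accepted
# pre-screened offer generally must be honored if the consumer still
# meets the criteria that were predetermined before the offer was made.
# The 700 cut-off mirrors the example in the text.

PREDETERMINED_CUTOFF = 700

def must_issue_card(post_screen_score: int) -> bool:
    """The card must be issued if the consumer still meets the
    predetermined cut-off, regardless of how far the score fell."""
    return post_screen_score >= PREDETERMINED_CUTOFF

must_issue_card(695)   # fell below 700: issuance likely not required
must_issue_card(704)   # fell from 780 but still meets the standard
```

The point of the sketch is that the decision depends only on the predetermined cut-off, not on the magnitude of the score decline: a drop from 780 to 704 still obligates the institution, while 695 falls short.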
FACT Act
In addition to marketing considerations, certain FACT Act provisions
are applicable to the underwriting and origination process. Section 112
of the FACT Act addresses fraud alerts and active duty status alerts.
According to the provisions, prospective users of a consumer credit
report that reflects fraud alerts or active duty alerts generally may
not establish certain new credit plans or extensions of credit in the
name of the consumer unless certain criteria are met, including
specified verification or contact procedures. Credit cannot be denied
based on the existence of a fraud alert or active duty alert. Rather,
the bank must use the specified methods to verify the identity of
consumers with such alerts on their records. In addition, FACT Act
provisions provide that certain entities that make or arrange certain
mortgage loans secured by 1-4 family properties for consumer purposes
using credit scores must provide the score and a standardized disclosure
to the applicants. Examiners should familiarize themselves with FACT
Act provisions and consult with their compliance counterparts if they
run across concerns.
Multiple Accounts
Without proper controls, multiple account strategies can rapidly and
significantly increase the bank's risk profile. The elevated risk
profile may come in many forms, such as excessive credit risk or
elevated reputation risk. Ill-managed multiple account strategies can
exacerbate portfolio deterioration trends.
Management's practices for considering the bank's entire relationship with an applicant, including, but not limited to, any other existing card accounts, should be incorporated into the examination review. The bank's system for aggregating related credit exposures should also undergo review. In extreme cases, some banks have granted additional accounts to borrowers who were already experiencing payment problems on their existing accounts. Examiners should expect management to carefully consider and document its decision to offer multiple accounts, especially when the products offered are accompanied by substantial fees that limit credit availability and card utility. A best practice is for management to document why a multiple account strategy was selected rather than a credit line increase program. For banks that offer multiple credit lines, examiners should see evidence that management has established sufficient reporting to show items such as the count, balance, and performance of cardholders holding more than one account. They should also determine whether management compares the performance of multiple account portfolios against that of portfolios where each cardholder maintains only one account. Regulators can and have required banks to discontinue multiple account strategies when management has not provided these necessary and appropriate management tools and reports. If multiple account strategies are not offered, examiners should evaluate how management prevents multiple accounts from being issued.
Initial Credit Line Assignments
With the profitability potential that credit cards typically offer,
issuers usually try to assign the highest credit lines possible. But
the potential rewards must be balanced against the risks for the
programs to be effective. Thus, it is critical that initial credit line
assignments are based on sound credit information. Inadequate analysis
of repayment capacity usually results in consumers receiving higher
credit lines than they can service, heightening the risk of default.
Criteria for line assignments vary but are often based on a combination of credit bureau score, income level, and/or other criteria. In any case, the credit lines assigned should be commensurate with the consumer's creditworthiness and ability to repay in accordance with soundly established terms, including emphasis on a reasonable amortization period. As discussed in the Marketing and Acquisition chapter, some banks assign the credit line up front and disclose it to the consumer as part of the pre-approved offer, while other banks assign the credit line on the back end, such as by offering the consumer a credit limit up to a maximum amount in the solicitation. For back-end credit line assignment, the amount of the line is not set until the consumer responds to the solicitation.
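A back-end line assignment of the kind just described might combine score and income as sketched below. Every tier, multiple, and figure here is a hypothetical assumption for illustration; actual criteria are bank-specific.

```python
# Hypothetical back-end line assignment: once the consumer responds, the
# line is set from score and income, capped at the maximum disclosed in
# the solicitation. Tiers and income multiples are illustrative only.

def assign_line(score: int, annual_income: float,
                max_offered: float) -> float:
    """Assign an initial line commensurate with creditworthiness and
    repayment capacity, never exceeding the solicited maximum."""
    if score >= 740:
        multiple = 0.25   # assumed fraction of annual income
    elif score >= 680:
        multiple = 0.15
    else:
        multiple = 0.08
    return min(annual_income * multiple, max_offered)

line = assign_line(score=700, annual_income=50_000, max_offered=10_000)
```

The cap on the solicited maximum reflects the chapter's point that for back-end assignment the solicitation discloses only an upper bound, while the income-based multiple ties the line to repayment capacity rather than score alone.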
As discussed in that chapter, compliance, credit, reputation, and other risks may arise depending on credit availability and card utility at account opening. Banks that offer products with limited credit availability or card utility at account opening are expected to maintain careful and thorough analysis demonstrating that the product will be and is marketed and underwritten in such a way as to fully address the various accompanying safety and soundness and consumer protection concerns raised by such products.
Policy and Underwriting Exceptions
Policy and underwriting exceptions
are conditions in approved loans that contravene the bank's lending
policies or underwriting guidelines. In an automated approval
environment, policy exceptions should be rare. However, if the
underwriting process includes a judgmental element, overrides are more
likely to occur. Examiners should look for evidence that management has
provided guidelines and limitations for granting loans that do not
conform to the lending policy and underwriting guidelines and that it
has established procedures for tracking and monitoring loans approved as
exceptions.
Tracking exceptions is a valuable tool for several reasons. In addition to aiding the assessment of portfolio risk profiles and the adequacy of loss allowances, it helps hold staff accountable for policy compliance and reassess the appropriateness of existing policies and practices.
Exceptions are tracked both on an individual and aggregate basis. Tracking the aggregate level of exceptions is common and helps detect shifts in the risk characteristics of the credit card portfolio. It facilitates risk evaluation and helps management identify new business and training opportunities. Analysis of aggregate exceptions eventually enables management to correlate particular types of exceptions with a higher probability of default. Policy and underwriting exceptions that are viewed individually might not appear to substantially increase risk. But, when aggregated, those same exceptions can considerably increase portfolio risk. As such, early detection and analysis of adverse trends in exceptions is a necessary element for ensuring timely and appropriate corrective action.
An excessive or increasing volume or a pattern of exceptions could signal unintended or unwarranted relaxation in underwriting practices. If the volume of exceptions is high, management may be prompted to reconsider its risk tolerance, revise policies to be better aligned with the credit culture or current market conditions, establish new limits on the volume of exceptions, or change the type of exceptions permitted. When management has revised policies in response to high volumes of exceptions, examiners should assess the implications of the revisions, including impacts on the bank‘s risk profile.
While high volumes of exceptions may indicate increased risk, so can a lack of exceptions. A lack of exceptions may indicate that the policy is too general to set clear limits on underwriting risk. If examiners identify an absence of exceptions, they should carefully review the bank‘s policies to ascertain whether such policies provide adequate and clear guidance and limits.
Examiners should gauge the sufficiency of portfolio managers' procedures for comparing the performance of exception loans with that of loans made within established guidelines. To facilitate comparison, management often uses exception coding and retains the codes even if the condition that triggered them no longer exists. Examiners should review management's practices for dropping exception codes or re-coding and should identify whether those practices skew or undermine the effectiveness of exception tracking.
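The aggregate tracking and performance comparison described in this section can be sketched as follows. The exception codes ("DTI", "SCORE") and record fields are illustrative assumptions, not a prescribed schema.

```python
# Sketch of aggregate exception tracking: tally retained exception codes
# across booked accounts and compare delinquency of exception loans
# against in-policy loans. Codes and fields are illustrative.
from collections import Counter

def exception_summary(loans: list) -> Counter:
    """Tally exception codes retained on booked accounts."""
    return Counter(code for loan in loans
                   for code in loan.get("exception_codes", []))

def delinquency_rate(loans: list) -> float:
    """Share of the given accounts currently delinquent."""
    if not loans:
        return 0.0
    return sum(1 for l in loans if l["delinquent"]) / len(loans)

loans = [
    {"exception_codes": ["DTI"], "delinquent": True},
    {"exception_codes": [], "delinquent": False},
    {"exception_codes": ["DTI", "SCORE"], "delinquent": False},
]
exception_loans = [l for l in loans if l["exception_codes"]]
in_policy_loans = [l for l in loans if not l["exception_codes"]]
# Compare delinquency_rate(exception_loans) with
# delinquency_rate(in_policy_loans) to spot correlated exception types.
```

Retaining codes even after the triggering condition lapses, as the text recommends, is what makes this comparison possible over the life of the accounts.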
Indices and Reporting
A variety of indices are available regarding the underwriting
function and its relationship with the marketing function. Items of
interest include, but are not limited to:
- Origination cost per account, which is the total origination cost over a measurement period in relation to the number of accounts that were originated during that same period. It measures the cost of establishing a new account relationship.
- Approval rate, which is the number of accounts approved over a measurement period in relation to the number of applications (or responses) received.
- Booking rate, which is the number of accounts actually booked over a measurement period in relation to either the number of approved accounts or the number of applications (or responses) received. In some instances the customer applies for credit but then declines the offer after being approved.
- Override rate, which is the number of overrides over a measurement period in relation to the number of applicants in the population.
- High-side override rate, which is the number of applicants over the cut-off score who were denied credit in relation to the number of applicants over the cut-off score.
- Low-side override rate, which is the number of applicants below the cut-off score who were given credit in relation to the number of applicants below the cut-off score.
- Application processing time, which is the amount of time it takes the institution to process the application from the time of receipt to the point the credit decision is made and communicated to the consumer.
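The rates above reduce to simple ratios. A sketch, with illustrative counts:

```python
# The underwriting indices defined above, expressed as ratios.
# All counts in the usage example are illustrative.

def approval_rate(approved: int, applications: int) -> float:
    """Approved accounts over applications (or responses) received."""
    return approved / applications

def booking_rate(booked: int, approved: int) -> float:
    """Booked accounts over approvals; some approved applicants
    decline the offer, so booked <= approved."""
    return booked / approved

def high_side_override_rate(denied_above: int, total_above: int) -> float:
    """Applicants above the cut-off denied credit, over all above it."""
    return denied_above / total_above

def low_side_override_rate(approved_below: int, total_below: int) -> float:
    """Applicants below the cut-off given credit, over all below it."""
    return approved_below / total_below

# 10,000 applications, 6,000 approved, 5,100 booked:
approval_rate(6_000, 10_000)   # 0.60
booking_rate(5_100, 6_000)     # 0.85
```

Note that the booking rate can be computed against either approvals or applications, as the list above states; the two denominators answer different questions (offer take-up versus end-to-end yield) and should be labeled in reporting.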
Portfolio problems can frequently be traced back to the bank's business generation and underwriting practices. Management is expected to devote sufficient resources to analyze changes in underwriting and credit scores, use appropriate systems and analytical tools to examine the results, and monitor warning signs of market deterioration. Common reports found in the underwriting department include, but are not limited to, those detailing policy changes, average credit score for new accounts, average initial credit lines assigned, approval rates, booking rates, and costs associated with the marketing and underwriting functions. Examiners should also determine whether management is monitoring reports as detailed in the Multiple Account Strategies and Policy and Underwriting Exceptions sections of this chapter. They should also look for evidence that management is using appropriate and sufficient segmentation techniques (program type, vintage, marketing channel, score distribution, etc.) within its reporting and is frequently monitoring marketing reports, usually no less than monthly.
Summary of Examination Goals – Underwriting and Loan Approval Process
Review of the underwriting and loan approval process is important
because the goal of the examination is not only to identify current
portfolio problems, but to identify potential problems that may arise
from ineffective policies, unfavorable trends, lending concentrations,
or non-adherence to policies. Examiners normally review items such as:
- The structure of the underwriting department and the expertise of its staff.
- Applicable board and/or committee minutes (in coordination with the examiner-in-charge).
- Underwriting policies and procedures.
- Underwriting chronology logs or similar documents summarizing changes in the underwriting and loan approval process.
- Planned underwriting and loan approval changes.
- Management reporting, tracking, and monitoring, including department statistics, portfolio statistics, and other segmentation statistics.
- Automated underwriting systems.
- Controls over judgmental underwriting processes.
- Management‘s identification of and response to adverse trends in the underwriting and loan approval area.
- Audits or other reviews of the underwriting and loan approval function.
The following items might signal current or future elevated risk and, thus, might warrant follow-up:
- Excessive or rapidly rising approval rates.
- Frequent or substantial changes in underwriting criteria.
- High employee turnover in the department.
- High or increasing exception volumes.
- Extremely low or non-existent exception volumes.
- High or increasing volume of accounts closed shortly after booking.
- Adverse performance of multiple account holders compared to cardholders with only one account.
- Few or ineffective management reports.
- Trends in the credit score distribution toward higher-risk accounts.
- High or increasing volume of consumer complaints.
- Credit lines inconsistent with products offered or with the target market‘s risk profile.
- Trends showing marked changes in average assigned lines.
These lists are not exhaustive, and examiners must exercise discretion in determining the scope and depth of examination procedures to apply. If examiners identify significant concerns, they should expand procedures accordingly.