Standards: Credible v Reliable.
Editor’s Note: This is Standards, part 3.4 of George Dell’s series on How Do I Move to EBV? Links to the earlier posts are here.
Here we consider how valuation standards can be updated to reflect today's complete data and computational analytics, and how they can be optimized for the public trust. The ultimate goal of USPAP (The Appraisal Foundation) is that an appraisal be ‘credible.’ The goal of Evidence Based Valuation (EBV©) is that it be reliable.
We will look at the differing definitions of ‘credible’ and ‘reliable,’ the valuation process behind each, the changes in technology, and the underlying econometric theory for each.
Definitions:
Credible is defined in USPAP (the Uniform Standards of Professional Appraisal Practice) as “worthy of belief.” “Belief worthiness” is wholly subjective and provides no measure of uncertainty or risk.
Reliable is universally defined as consistent “freedom from random error” from one use to the next. Reliability enables auditing and the scoring of risk and certainty.
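One standard way to formalize that definition, drawn from classical measurement theory rather than from USPAP or EBV© directly, treats each repeated valuation as a true component plus random error, and defines reliability as the share of observed variance that is not error:

\[
\text{reliability} \;=\; \frac{\sigma^2_{\text{true}}}{\sigma^2_{\text{true}} + \sigma^2_{\text{error}}}
\]

A method with no random error from one use to the next scores 1; a method dominated by noise scores near 0. Either way, the number can be computed, audited, and compared.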
Process differences:
In traditional appraisal, the appraiser picks some comps, makes some adjustments, and reconciles the differences. All three steps (selection, adjustment, and reconciliation) rely on subjective judgment.
In Evidence Based Valuation (EBV©), the competitive market is identified, predictive algorithms are applied, and a reliability/risk score is calculated. All three elements are based on factual data.
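As a minimal sketch only, and not the EBV© procedure itself, the three steps can be illustrated with synthetic data and an ordinary least squares model (the field names, filters, and subject characteristics below are invented for the example):

```python
# Illustrative sketch of the three EBV-style steps on synthetic data:
# 1) identify the competitive market, 2) apply a predictive algorithm,
# 3) calculate a reliability/risk score (here, a prediction interval).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "all relevant sales" for a hypothetical market area.
n = 200
sqft = rng.uniform(1200, 3200, n)
beds = rng.integers(2, 6, n)
price = 50_000 + 150 * sqft + 12_000 * beds + rng.normal(0, 25_000, n)

# 1) Identify the competitive market segment (here, a simple size filter).
mask = (sqft > 1500) & (sqft < 2800)
X = np.column_stack([np.ones(mask.sum()), sqft[mask], beds[mask]])
y = price[mask]

# 2) Apply a predictive algorithm: ordinary least squares on the whole segment.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = len(y) - X.shape[1]
s2 = resid @ resid / dof                              # residual variance

# 3) Score reliability/risk: an approximate 95% prediction interval for the subject.
x0 = np.array([1.0, 2100.0, 3.0])                     # subject: 2,100 sq ft, 3 bedrooms
point = x0 @ beta
se_pred = np.sqrt(s2 * (1 + x0 @ np.linalg.inv(X.T @ X) @ x0))
low, high = point - 2 * se_pred, point + 2 * se_pred  # normal approximation
print(f"value ~ {point:,.0f}; 95% range {low:,.0f} to {high:,.0f}")
```

The point value and the interval are reported together; the width of the interval is the reliability/risk score, and every input to it can be audited.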
Tech differences:
Vintage practice, when it first evolved, solved the problem of sparse, difficult-to-collect data. It required refinement and confirmation. It relied on experience, area familiarity, and related judgment training. Data selection was by personal judgment, convenience, and personal connection. These habits continue, even as data, analytic, and visualization technology has changed dramatically.
Data science practice applies the ‘let the data speak’ principle. It reflects the actual behavior of buyers, sellers, and agent/lender influences. Expert judgment refines and sharpens the underlying detail, which guards against sweeping generality and unchecked intuition. Visualization and graphs are the primary brain/machine interface.
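As a small, hypothetical illustration of graphs as the brain/machine interface (the variables and data are invented for the example), a single scatter of sale price against living area, with a fitted trend, lets the data speak before any judgment is applied:

```python
# Hypothetical market scatter: let the data speak before applying judgment.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
sqft = rng.uniform(1200, 3200, 150)                       # synthetic living areas
price = 60_000 + 160 * sqft + rng.normal(0, 30_000, 150)  # synthetic sale prices

slope, intercept = np.polyfit(sqft, price, 1)             # simple fitted trend

plt.scatter(sqft, price, alpha=0.5, label="sales")
order = np.argsort(sqft)
plt.plot(sqft[order], intercept + slope * sqft[order], color="red", label="trend")
plt.xlabel("living area (sq ft)")
plt.ylabel("sale price ($)")
plt.title("Competitive market segment (synthetic data)")
plt.legend()
plt.show()
```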
Theory differences:
Economic and human behavior theory underlies both the traditional and the data science approaches. The differences are:
- Assess all relevant data, rather than compare a choice of hand-picked ‘comps.’
- Replace the fiction of calculated adjustments with a real measure of uncertainty (one standard form is sketched below).
We measure markets, not compare comps.
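To make “a real measure of uncertainty” concrete, here is one standard textbook example, an ordinary least squares prediction interval; EBV© is not limited to this particular model. For a subject property with characteristics \(x_0\), the predicted value \(\hat{y}_0\) is reported together with the interval

\[
\hat{y}_0 \;\pm\; t_{\alpha/2,\,n-p}\; s\,\sqrt{1 + x_0^\top (X^\top X)^{-1} x_0},
\]

where \(X\) holds the characteristics of all \(n\) market sales used in the fit, \(p\) is the number of model terms, and \(s\) is the residual standard error. The width of that interval is the measured uncertainty; no comp-by-comp adjustment grid is required to produce it.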
In practice:
Credibility (belief worthiness) cannot be measured. It can only be reviewed or personally judged, and the review itself is subject to the same “worthiness of belief” standard as the original appraisal opinion. An opinion cannot be challenged as being right or wrong; it can only be agreed with or disagreed with. USPAP has only three documentable “violations”: 1) failure to identify the client; 2) failure to keep records; and 3) committing gross negligence (presumably minimal or moderate negligence is acceptable).
Reliability (sureness or certainty) can be measured. Facts and assumptions can be verified, data can be audited, and algorithms can be replicated. Reliability is substantially objective in nature, so both the process and the results can be audited. Most importantly, the obverse of reliability is risk. Users, and the public trust, need to know risk: the risk of collateral loss, investment failure, or unfairness. Fairness concerns arise in litigation, in tax assessment, and in questions of racial bias.
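As a hypothetical sketch of what “data can be audited, algorithms can be replicated” can look like in practice (the file name, fields, and placeholder model below are assumptions for illustration), the input data is fingerprinted and the valuation is re-run deterministically, so any reviewer holding the same file and code reproduces the same number:

```python
# Hypothetical audit/replication check: same audited data + same code -> same result.
import hashlib, json

def fingerprint(path: str) -> str:
    """Hash the raw sales file so a reviewer can confirm they audited the same data."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def value_model(records: list) -> float:
    """Deterministic placeholder model: median price per square foot times subject size."""
    ppsf = sorted(r["price"] / r["sqft"] for r in records)
    return round(ppsf[len(ppsf) // 2] * 2100)    # hypothetical 2,100 sq ft subject

# A tiny synthetic data set, written to disk so the example runs end to end.
sales = [{"price": 410_000, "sqft": 1900},
         {"price": 455_000, "sqft": 2100},
         {"price": 498_000, "sqft": 2400}]
with open("market_sales.json", "w") as f:
    json.dump(sales, f)

print("data sha256:", fingerprint("market_sales.json"))  # auditable input
print("indicated value:", value_model(sales))            # replicable output
```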
The goal of reliability clarifies analytic bias and disables personal bias, whether conscious or unconscious.
Vintage USPAP can be replaced with objective, measurable valuation standards. Data-science-based standards and best practices will be explored in a series to follow in this weekly blog, and elsewhere from Valuemetrics.info, GeorgeDell.com, and the CAA (Community of Asset Analysts).