Can we compare “statistical adjustment” methods to traditional appraisal adjustment practices?

‘Adjustment’ is mentioned only once in USPAP (in Standard 5, on mass appraisal).  However, in The Appraisal of Real Estate, “adjust” comes up 687 times!  It must be important.  Surely, it’s clearly explained.

In each use, the context is critical.  The book states:

  • ‘Adjustment’ builds on the economic principles of balance, contribution, surplus productivity, and conformity.
  • Market analysis is the basis for adjustments in the traditional ‘three approaches.’

The Valuation Process chapter lists the ten elements of comparison. It states: “Dollar or percentage adjustments are then applied to the known sale price of each comparable property to derive an indicated value for the subject property. Qualitative analysis techniques may also be applied for elements of comparison for which quantitative adjustments cannot be developed.”
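For those who like to see the arithmetic, here is a minimal Python sketch of a dollar adjustment and a percentage adjustment applied to one comparable’s known sale price to derive an indicated value. The figures, the adjustment names, and the order of application (percentage first, then dollars) are illustrative assumptions, not the book’s prescription.

```python
# Hypothetical single-comparable adjustment; all figures are assumed.
comp_sale_price = 400_000            # known sale price of the comparable

percentage_adjustments = {
    "market_conditions": +0.03,      # assume 3% appreciation since the sale
}
dollar_adjustments = {
    "condition": -10_000,            # comp assumed superior in condition
    "garage":     +5_000,            # comp assumed to lack a garage bay
}

# Apply percentage adjustments to the sale price, then add dollar adjustments.
indicated_value = comp_sale_price * (1 + sum(percentage_adjustments.values()))
indicated_value += sum(dollar_adjustments.values())

print(f"Indicated value from this comparable: ${indicated_value:,.0f}")
# -> Indicated value from this comparable: $407,000
```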

The Scope of Work chapter says that adjustments should be ‘supportable.’  The Data Collection chapter then shows how to put them into a mathematical ‘grid’ format, whether you support them or not.  It also notes that comparable properties should have the same highest and best use as the subject.
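A minimal sketch of that ‘grid’ format, built with pandas and entirely hypothetical adjustments, might look like the following. Notice that the code will happily total the grid whether or not the adjustments are supportable; support is a separate analytical question.

```python
import pandas as pd

# Hypothetical three-comparable adjustment grid; every figure is assumed.
grid = pd.DataFrame(
    {
        "sale_price":        [400_000, 385_000, 420_000],
        "market_conditions": [  8_000,   4_000,       0],   # time adjustment
        "gla":               [ -6_000,   9_000, -12_000],   # living area
        "condition":         [      0,  -5_000,   5_000],
    },
    index=["comp_1", "comp_2", "comp_3"],
)

adjustment_cols = ["market_conditions", "gla", "condition"]
grid["net_adjustment"] = grid[adjustment_cols].sum(axis=1)
grid["indicated_value"] = grid["sale_price"] + grid["net_adjustment"]

print(grid[["sale_price", "net_adjustment", "indicated_value"]])
```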

The chapter titled Statistical Analysis in Appraisal reports that “Statistical techniques like regression analysis have become accepted tools in the application of the approaches to value.”
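To illustrate that statement, the sketch below derives living-area and effective-age adjustments as ordinary least squares regression coefficients. The sales are simulated, and the variable names and coefficients are assumptions for demonstration only.

```python
import numpy as np

# Simulate a small competitive market segment (assumed relationships).
rng = np.random.default_rng(0)
n = 60
gla = rng.uniform(1_400, 2_600, n)          # gross living area, sq ft
age = rng.uniform(5, 40, n)                 # effective age, years
price = 150 * gla - 1_200 * age + 120_000 + rng.normal(0, 8_000, n)

# Ordinary least squares: price = b0 + b1*gla + b2*age
X = np.column_stack([np.ones(n), gla, age])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)

print(f"Indicated living-area adjustment: ${coef[1]:,.0f} per sq ft")
print(f"Indicated age adjustment:         ${coef[2]:,.0f} per year")
```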

The Comparative Analysis chapter states “Several techniques are available,” such as paired data analysis, grouped data analysis, secondary data analysis, statistical analysis, graphic analysis, scenario analysis, cost-related adjustments, and income differences.  It emphasizes that “extreme care” should be used, that the data be “truly comparable,” and that other differences “do not exist.”  Unfortunately, this situation never exists!  The chapter then recommends a ‘grouped’ method for time adjustment.  It’s too bad that this method involves statistical “information loss,” which created much of the problem with the former Fannie Mae form 1004MC market analysis.  The grouped method can even give results in the wrong direction!
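The information-loss point can be illustrated with a small simulation (not a reproduction of the 1004MC calculation itself): group the sales into an older and a newer period and compare medians, then contrast that with a regression that keeps every sale’s actual date. The trend, sale count, and noise level below are assumptions.

```python
import numpy as np

# Fourteen simulated sales with a TRUE upward trend of about 0.5% per month,
# plus ordinary sale-to-sale noise.
rng = np.random.default_rng(7)
months = np.linspace(0.5, 11.5, 14)               # sale dates over one year
price = 300_000 * (1 + 0.005 * months) * rng.normal(1, 0.06, 14)

# Grouped method: median of the older half vs. median of the newer half.
old, new = np.median(price[months < 6]), np.median(price[months >= 6])
print(f"Grouped medians:  {(new / old - 1):+.1%} change between halves")

# Regression keeps every sale's actual date and price (no grouping).
slope, intercept = np.polyfit(months, price, 1)
print(f"Regression trend: {slope / intercept:+.2%} per month")

# With so few sales per group, the grouped result can easily understate the
# trend, or even point the wrong way, while regression uses all the data.
```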

So, the traditional adjustment methods are explained in words, for what is really a mathematical or statistical process.  How does this compare to adjustment in modern evidence-based analytics?

In statistical analysis, “adjustment” mostly concerns correcting for selection bias: fixing the effects of imperfect ‘comps.’  For appraisers, selection bias comes from ‘picking comps.’  Bias can be intentional or unintentional, and it’s important to distinguish between human bias and model (algorithm) bias.  (A small simulation of comp-picking follows the list below.)

  • Intentional bias occurs when the analyst consciously tries to ‘hit a number.’ This usually goes along with diligent effort to look unbiased, which takes great effort and skill in itself!
  • Subconscious bias, such as psychological anchoring to a number (like a sale price). It can stem from the human desire to please another, or perhaps from a motive not to lose a source of income.
  • Unintentional bias can come from two sources:
    • Data bias: the market data (comps) are limited, so random variation alone creates bias.
    • Model bias: the data selection itself is flawed, whether from a lack of analyst competence or from reliance on ‘canned’ or purchased software.
  • Approximation bias, which comes from random sampling; sampling itself is inappropriate for typical valuation work, where the complete competitive market segment can be used.
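A small simulation makes the comp-picking point concrete. Here we assume a competitive market segment of 40 sales around a true central value, and an analyst who (consciously or not) keeps only the comps that land near an anchor price. All of the values are illustrative.

```python
import numpy as np

# Assumed competitive market segment: 40 sales centered near $350,000.
rng = np.random.default_rng(42)
market = rng.normal(350_000, 20_000, 40)
anchor = 370_000                                  # e.g. a contract price

# 'Picking comps': keep only sales within $15,000 of the anchor.
picked = market[np.abs(market - anchor) < 15_000]

print(f"Full-segment median: ${np.median(market):,.0f}  (n = {market.size})")
print(f"Picked-comps median: ${np.median(picked):,.0f}  (n = {picked.size})")
# The picked subset is pulled toward the anchor, not toward the market.
```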

In the Evidence Based Valuation© (EBV) curriculum, we resolve adjustments into three basic types (sketched in code after the list):

  • Those which can be calculated with math (deterministic)
  • Those which can be estimated statistically (probabilistic)
  • Those which are biased, but still useful (asymptotic)
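One way to picture the three types, purely as an illustrative assumption about how each might appear in a report, is sketched below.

```python
# Hypothetical examples of the three adjustment types; the assignment of
# each example to a type is an assumption for discussion, not EBV doctrine.

# Deterministic: computable exactly, e.g. a seller-paid concession deducted
# dollar for dollar from the comparable's sale price.
concession_adjustment = -6_000

# Probabilistic: estimated from market data with stated uncertainty, e.g. a
# regression coefficient for living area reported with a 95% interval.
gla_adjustment_per_sqft = {"estimate": 145, "interval_95": (120, 170)}

# Asymptotic: known to carry some bias yet still useful, e.g. a cost-derived
# estimate of a feature's contribution used when market evidence is thin.
feature_adjustment_from_cost = 18_000
```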

In the data science approach (EBV©) to asset analytics, adjustments tend to be much less prickly.  This is because the data selection is clearly defined and complete.  The result is that adjustments usually become obvious and self-supporting.  Adjustments are a result of the analysis, not an opinion to support.

In the Valuemetrics.info education, we learn to apply easy methods to clarify or even eliminate adjustment problems.