The History of Appraisal Data

“The good is often the enemy of the best.” (Bill W.)

Traditional appraisal has been good.  Until just a few years ago, even at predictive-methods and collateral-risk conferences, I often heard the words:  A real appraisal is the ‘gold standard.’  What happened?  Did appraisers suddenly get worse?

I don’t think so.  Appraisers are stuck in a system, a model, an “appraisal process” which has become outdated.

How’d this happen?  It’s history . . .

In a presentation at the 2014 Appraisal Institute Conference in Austin, Texas (August 6, 2014), I divided the history of data into four eras:

  • The Data Rummage Era, when the primary appraisal focus was the collection of hard-to-find data.  Appraisers generally ‘owned’ the data, and it was part of the value of their business.  (1920s–1970s)
  • The Market Analysis Era, when data became sufficiently available (such as MLS books and printed commercial data sources) to raise awareness of competitive market segments.  But appraisers continued to reckon markets from whatever similar properties they were able to find.  Three to six report comparables remained the accepted protocol.  (1980s–1990s)
  • The Data Discarding Era, when complete or substantially complete data sets became electronically available (the quality of that data is a separate topic), but appraisers continued the subjective practice of selecting three to six report comparables.  “Junk science,” particularly in litigation, seems to have flourished from the mismatch between the data available and subjective data selection.  (1990s–2000s)
  • The Data Optimization Era is now upon us. It is the awareness that there is an optimal amount of data that produces the best analytical precision, the most accurate prediction, and a measurably reproducible result.

What happened is that our education began to fall behind the reality of the marketplace.  We continued to use a handful of convenient, subjectively selected “comps.”  We pressed on, even as complete market information became available in a second.  We discarded it.  It just didn’t fit the model we were taught.  Worse yet, little or no education was available to show how to analyze the larger data sets (except perhaps the excellent AI course on Highest and Best Use and Market Analysis).  Even there, the emphasis on the market was detached from selecting a comparable data set.  The appraiser still used a handful of anecdotally picked comps to “support” his or her opinion.  The concept was:  support your opinion, rather than analyze to a result.

What should we be doing?

Data is important.  Data becomes useful information in two ways:  1) sort/arrange/transform;  2) analyze/estimate/predict.  Information is data made useful.  There is a tradeoff between the amount of data used and the precision of the result; it is the amount of information that needs to be maximized.  More data yields more information only up to a point.  Too much irrelevant data means the reliability of the result becomes questionable (poor precision).  Too little relevant data means there is information loss.
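The tradeoff can be sketched with a toy calculation.  The prices below are invented for illustration (not real market data); the point is only how the standard error of the estimate behaves as the data set grows:

```python
import statistics

# Hypothetical sale prices in $/sq ft -- illustrative figures only.
relevant = [210, 205, 215, 208, 212, 207, 213, 209, 211, 206]  # same segment as the subject
dissimilar = [150, 300, 160, 290, 170, 280]                    # sales from other segments

def estimate(prices):
    """Point estimate (mean) and standard error of the mean price."""
    mean = statistics.mean(prices)
    se = statistics.stdev(prices) / len(prices) ** 0.5
    return mean, se

few = estimate(relevant[:3])                 # a handful of comps: wide standard error
all_rel = estimate(relevant)                 # the full relevant set: the error tightens
too_many = estimate(relevant + dissimilar)   # adding dissimilar sales blows it up again

for label, (m, se) in [("3 comps", few), ("10 relevant", all_rel), ("16 mixed", too_many)]:
    print(f"{label:12s} mean = {m:6.1f}   std. error = {se:5.1f}")
```

Going from three relevant comps to ten shrinks the standard error; piling on six dissimilar sales makes it worse than the three-comp case.  Somewhere between too little and too much sits the optimal data set.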

The appraisal profession and its related vocations are mired in intentional information loss because of an education system which continues to require:

  • Subjective data selection, because there is no working definition of “what is a comparable?”;
  • Data discarding, because no attention is given to information optimization;
  • Reporting which emphasizes subjective storytelling, rather than objective analytics.

Lest I be charged with discounting the “art” of valuation . . .   There is art to science.  But it’s a new art, called modeling.  It’s the mark of a competent valuation data scientist.

Valuation data science is the hope of the valuation profession.  “Trust me” no longer works.  We must do what we were taught in the third grade:  “Show your work!”

Future writings will present a clear path to save the profession.  A clear path to enable an authentic “high level of public trust.”  A workable path that can truly help prevent the next economic crisis.

It can be done.